
Mastering Oracle 10 Programming.pdf

Since Oracle's founding, from the earliest database releases through Oracle7, Oracle8i, Oracle9i, Oracle10g, and Oracle11g, each version has differed somewhat in its operation, but Oracle's data operations essentially follow the SQL standard, so for Oracle development work the differences between versions are small. Many people balk at learning Oracle before they even start, because of the misconception that Oracle is too hard and not a database for ordinary users. In fact, for application developers all databases are much alike, since most databases today support standard SQL. In this book we can learn:

  • Installing Oracle
  • Oracle data management
  • Common subqueries and common functions
  • PL/SQL programming
  • Basic Oracle administration

2012-10-17

Oracle 10g Tutorial.pdf

Oracle Database is the core product of Oracle Corporation (known in Chinese as 甲骨文). It is a database management system suited to medium and large enterprises. Among all database management systems (Microsoft's SQL Server, IBM's DB2, and so on), Oracle has an exceptionally broad user base, spanning banking, telecommunications, mobile communications, aviation, insurance, finance, e-commerce, and multinational corporations. The Oracle software itself is free: the installer can be downloaded from Oracle's official site. Oracle's services, on the other hand, are paid.

2012-10-17

Classic Oracle Tutorial.doc

In the first semester we already worked with the relational database SQL Server and learned the basic concepts of databases, tables, records, and the insert/delete/update/query operations on tables. Oracle is an object-relational database: like SQL Server it stores and manages data in tables, and it adds some object-oriented ideas to its operations.

2012-10-17

JSP Large-File Upload Control (Access, UTF-8)

HttpUploader4 ships a fully upgraded file IO component. When handling files on disk, the new component no longer performs explicit I/O operations on the file, which means no buffer has to be requested and allocated per file; all file caching is managed directly by the system. Because the steps of loading file data into memory, writing data back from memory to the file, and freeing memory blocks are eliminated, HttpUploader4 handles terabyte-scale data at lightning speed.

The new IO component gives HttpUploader4 much stronger big-data handling. MD5 verification of gigabyte-scale files is now 4x faster, with lower CPU usage. HttpUploader4 also takes better care of the hard disk: instead of performing I/O on the file directly, it operates on the file in memory, greatly reducing disk reads and writes while becoming faster at the same time.

With HttpUploader4, enterprises can help users handle their work files more easily and communicate more efficiently, improving competitiveness at the root. Since different enterprises develop on different platforms, we provide complete database-integrated samples (ASP.NET, JSP, PHP), so developers can very easily add resumable upload to their own systems.

Product highlights:
1. Stable transfer of terabyte-scale files.
2. Optimized MD5 component; file scanning is 70% faster.
3. Disk protection: 50% fewer disk IO operations when uploading very large files.
4. Redesigned IO component; memory use stays at 128KB regardless of file size.
5. Drag-and-drop upload of files and folders.
6. Batch file upload.
7. Folder upload.
8. Based on standard HTTP.
9. Free JavaScript SDK for quickly integrating the plugin into an existing site.

Supported languages: PHP, JSP, ASP, ASP.NET (C#), ASP.NET (VB), C++, VC, VC.NET, VB, VB.NET, C#, C#.NET, Delphi, C++Builder
Supported platforms: Visual Studio 6.0/2002/2003/2005/2008/2010, C++ Builder 6.0/2009/2010, Delphi 7/2009, Visual Basic 6.0/2008, MyEclipse 8.x
Supported scripts: JavaScript, VBScript
Supported servers: Windows NT, Windows 2003, Windows XP, Windows Vista, Windows 7, Linux, Unix
Supported browsers: IE6, IE7, IE8, 360 Secure Browser, QQ Browser, Sohu Browser, Maxthon 1.x, Maxthon 2.x
Supported file sizes: 2GB~8EB (1EB=1024PB, 1PB=1024TB, 1TB=1024GB)
Supported file types: any

Copyright 2009-2012 Wuhan Mingyun Technology Co., Ltd. All rights reserved.
Official site: http://www.ncmem.com/
Product page: http://www.ncmem.com/webplug/http-uploader3/index.aspx
Online demo: http://www.ncmem.com/products/http-uploader3/demo/index.html
Product introduction: http://www.cnblogs.com/xproer/archive/2012/05/29/2523757.html
Developer docs (ASP): http://www.cnblogs.com/xproer/archive/2012/02/17/2355458.html
Developer docs (PHP): http://www.cnblogs.com/xproer/archive/2012/02/17/2355467.html
Developer docs (JSP): http://www.cnblogs.com/xproer/archive/2012/02/17/2355462.html
Developer docs (ASP.NET): http://www.cnblogs.com/xproer/archive/2012/02/17/2355469.html
Changelog: http://www.cnblogs.com/xproer/archive/2012/02/17/2355449.html
Sample download: http://www.ncmem.com/download/HttpUploader4-demo.rar
Documentation download: http://www.ncmem.com/download/HttpUploader4-doc.rar
Mirror (DBank): cab installer, developer docs
Mirror (JSP): cab installer, developer docs, ASP.NET-Access sample, JSP-Access sample (GB2312), JSP-Access sample (UTF-8), JSP-SQL2005 sample (UTF-8), JSP-MySQL sample (UTF-8)
Mirror (PHP): MySQL sample (UTF-8)
Feedback: http://www.ncmem.com/blog/guestbook.asp
Digital certificate patch: http://www.ncmem.com/download/rootsupd.rar
VC runtime: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=29
Contact email: [email protected]
Contact QQ: 1085617561
Technical QQ: 1269085759
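The IO behavior described above (no per-file buffer allocation, caching managed entirely by the system, flat memory use) is what OS-level file mapping provides. Below is a minimal Win32 sketch of that general technique; it is an illustration only, not the vendor's actual component, and a truly huge file would be mapped one window at a time rather than all at once.

#include <windows.h>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc < 2) { std::printf("usage: mapdemo <file>\n"); return 1; }

    // Open the file; no user-space buffer is allocated for its contents.
    HANDLE file = CreateFileA(argv[1], GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    LARGE_INTEGER size;
    GetFileSizeEx(file, &size);

    // Map the file: paging is handled by the system cache, so the process
    // never copies the data into heap buffers of its own.
    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    const unsigned char* bytes = mapping
        ? static_cast<const unsigned char*>(MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0))
        : nullptr;

    if (bytes) {
        unsigned long long checksum = 0;   // stand-in for the MD5 pass
        for (LONGLONG i = 0; i < size.QuadPart; ++i) checksum += bytes[i];
        std::printf("%lld bytes, checksum %llu\n", size.QuadPart, checksum);
        UnmapViewOfFile(bytes);
    }
    if (mapping) CloseHandle(mapping);
    CloseHandle(file);
}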

2012-08-22

HTTP Resumable Upload Control

Most websites today receive files via standard HTML upload. Standard HTML upload can generally handle only about 4MB per file, and it fails easily when many users upload at once. It is simple to develop, but the user experience is poor and file size is limited, so when we need to upload files of hundreds of megabytes, or gigabytes, standard HTML upload cannot meet the need.

Meanwhile, as the Internet industry grows, users keep generating new requirements and expect a better experience, which traditional HTML upload increasingly fails to deliver. Most users now want batch upload: selecting many images with one click of the mouse, rather than picking files one at a time, which wastes time and is tedious.

In recent years, the rapid growth of digital media and video has stimulated demand for large uploads. More and more users want to upload large files, such as movies and images, over the web. These files are usually very large, typically over 500MB, with high-definition video at 1GB or more. Files this large simply cannot be uploaded via standard HTML.

Moreover, network conditions in China are rather particular, and in some regions the network is unstable: the connection may drop during an upload. If a user uploading a 1000MB file loses the connection at 500MB, the next attempt should resume from 500MB rather than re-send the first 500MB, which would waste a great deal of the user's time.

The Xinying Network HTTP resumable upload control was developed specifically to meet the need for large HTTP uploads. With our HttpPartition module, users can conveniently select more than 200 files at once. We have also upgraded the experience: users can not only select multiple files with a button, but also drag and drop files and even folders via HttpDroper.

We now comfortably support uploads of around 2GB. To reduce server load, the HttpUploader module does not send 2GB in one request; it divides the 2GB into small blocks and uploads about 128KB of data per request. Each request also carries the file size, the starting offset, and the file MD5. For developers, with this information, resumable upload becomes as simple as an ordinary file upload. (A sketch of this chunking scheme follows below.)

We believe the Xinying Network HTTP resumable upload control can help you win your market.

Copyright 2009-2012 Beijing Xinying Network. All rights reserved.
Official site: http://www.ncmem.com/
Product page: http://www.ncmem.com/webplug/http-uploader3/index.aspx
Online demo: http://www.ncmem.com/products/http-uploader/demo/index.html
Product introduction: http://www.cnblogs.com/xproer/archive/2012/02/17/2355440.html
Developer docs (ASP): http://www.cnblogs.com/xproer/archive/2012/02/17/2355458.html
Developer docs (PHP): http://www.cnblogs.com/xproer/archive/2012/02/17/2355467.html
Developer docs (JSP): http://www.cnblogs.com/xproer/archive/2012/02/17/2355462.html
Developer docs (ASP.NET): http://www.cnblogs.com/xproer/archive/2012/02/17/2355469.html
Changelog: http://www.cnblogs.com/xproer/archive/2012/02/17/2355449.html
Sample download: http://www.ncmem.com/download/HttpUploader3-demo.rar
Documentation download: http://www.ncmem.com/download/HttpUploader3-doc.rar
Feedback: http://www.ncmem.com/blog/guestbook.asp
Windows digital certificate patch: http://www.ncmem.com/download/rootsupd.rar
Microsoft Visual C++ 2008 Redistributable Package (x86): http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=29
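The chunking scheme described above is straightforward to reproduce. Here is a minimal C++ sketch, an illustration rather than the vendor's code: the 128KB block size comes from the description, while the MD5 step and the send_chunk transport call are stubbed out as hypothetical.

#include <algorithm>
#include <cstdint>
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: chunker <file>\n"; return 1; }
    const std::uint64_t kChunk = 128 * 1024;   // ~128KB per request, per the description

    std::ifstream in(argv[1], std::ios::binary | std::ios::ate);
    if (!in) { std::cerr << "cannot open " << argv[1] << "\n"; return 1; }
    const std::uint64_t total = static_cast<std::uint64_t>(in.tellg());
    in.seekg(0);

    std::string buf(kChunk, '\0');
    for (std::uint64_t offset = 0; offset < total; offset += kChunk) {
        const std::uint64_t n = std::min<std::uint64_t>(kChunk, total - offset);
        in.read(&buf[0], static_cast<std::streamsize>(n));
        // Each request would carry: total file size, starting offset, the chunk
        // bytes, and a file MD5 so the server can resume and verify (omitted here).
        std::cout << "POST chunk offset=" << offset << " size=" << n
                  << " of total=" << total << "\n";
        // send_chunk(buf.data(), n, offset, total);  // hypothetical transport call
    }
}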

2012-02-21

MSYS Installation and Usage Tutorial (PDF)

Introduction, download, and installation

Installation: all packages here that need extracting are compressed in 7z format. If you cannot open the downloaded files, or extraction fails, your copy of WinRAR is probably too old; the archives themselves are fine. We strongly recommend the free, open-source 7-Zip, which also achieves better compression: http://www.7-zip.org/

Option 1: first read the notes at the end of this page, then install the base package:
  • MSYS base system: http://msys-cn.googlecode.com/files/MSYS.7z
Then choose only one of the following:
  • GCC 4 compiler: http://msys-cn.googlecode.com/files/mingw.7z
  • GCC 3 compiler: http://msys-cn.googlecode.com/files/mingw3.7z
  • Note: GCC 3 and GCC 4 each have advantages. GCC 3 has the best compatibility with most open-source packages, but the official MinGW build was compiled without iconv, i.e., without Unicode support, whereas the GCC 4 here…

2012-01-01

FCKEditor2.6.4.1

FCKeditor (shareware) provides online document editing with functionality like that of Microsoft Office; the difference is that FCKeditor requires no client-side installation of any kind. The program is very lean yet powerful, which has won it wide popularity, and in the blog-driven Web 2.0 era FCKeditor has entered ordinary users' view.

FCKeditor features:
  • Full toolbar customization
  • Skin support
  • Plugin support
  • Spell checker
  • Multi-language support with automatic user-language detection
  • Lightweight and fast
  • Automatic browser detection and customization
  • Support for many programming languages
  • Support for developer installation and customization
  • Simple and easy for end users

FCKeditor supports the following server-side environments: ASP.Net, ASP, ColdFusion, PHP, Java, Active-FoxPro, Lasso, Perl, Python. The latest FCKeditor (3.5) is compatible with the vast majority of mainstream browsers, including IE 5.5 and later (Windows), Firefox 1.0 and later, Mozilla 1.3 and later, and Netscape 7.0 and later.

2011-04-10

Google C++ Style Guide

A table of contents has been added to the PDF for quick browsing. Excerpts of points I either did not know or have not been following:

1. Guard against multiple inclusion (header guards cannot do this perfectly). All header files should use #define guards to prevent multiple inclusion, named <PROJECT>_<PATH>_<FILE>_H_. To guarantee uniqueness, base the name on the full path of the file in the project's source tree. For example, the header foo/src/bar/baz.h in project foo is guarded like this:

#ifndef FOO_BAR_BAZ_H_
#define FOO_BAR_BAZ_H_
...
#endif // FOO_BAR_BAZ_H_

2. Depend on declarations, not definitions, where you can. Use forward declarations to minimize the number of #include directives in .h files. Including a header introduces a dependency: whenever that header changes, the code must be recompiled; and if your header includes other headers, any change to those forces everything that includes your header to recompile as well. So prefer to include as few headers as possible, especially headers included inside other headers. Forward declarations can significantly cut the number of headers you need to include. For example: a header uses the class File but does not need access to File's definition; the header then only needs the forward declaration class File; and no #include "file/base/file.h". How can a header use a class Foo without access to its definition? 1) Declare data members of type Foo* or Foo&. 2) Declare (but do not define) functions whose parameters or return values are of type Foo. 3) Declare static data members of type Foo, since static data members are defined outside the class definition. On the other hand, if your class is a subclass of Foo, or has a non-static data member of type Foo, you must include its header. (A compilable sketch follows after this section.)

3. Why use inline functions, and how do they improve performance? Definition: when a function is declared inline, the compiler may expand it in place instead of invoking it through the usual call mechanism. Importantly, virtual functions and recursive functions are not necessarily inlined even when declared inline. As a rule, recursive functions should not be declared inline (translator's note: unwinding a recursive call stack is not as simple as unrolling a loop; the recursion depth may be unknown at compile time, and most compilers do not inline recursive functions). The main reason destructors are often inline is that they are defined inside the class definition, for convenience or to document their behavior.

4. Function parameter ordering. When defining a function, put input parameters first, output parameters after. C/C++ function parameters are either inputs or outputs, and sometimes an input also serves as an output (translator's note: when its value is modified). Inputs are usually passed by value or const reference; outputs and input/output parameters are non-const pointers. When ordering parameters, place all inputs before all outputs; in particular, do not put a parameter last merely because it is newly added - it still goes before the outputs.

5. Order of header inclusion. Standardizing include order improves readability and avoids hidden dependencies. The order: the header corresponding to this .cc file, C library, C++ library, other libraries' .h files, project .h files. Within the same directory, alphabetical order is a good choice.

1. Namespaces. In .cc files, unnamed namespaces are encouraged (translator's note: unnamed namespaces are like unnamed classes and seem rarely explained). When using named namespaces, base the name on the project or path, and do not use using-directives. Definition: namespaces subdivide the global scope into distinct, named scopes, effectively preventing name collisions at global scope. Advantage: namespaces provide a (nestable) axis of naming (translator's note: partitioning names into different namespaces), just as classes provide a (nestable) naming axis (partitioning names into class scopes). For example, if two projects each have a class Foo at global scope, the names collide at compile or run time; if each project places its code in its own namespace, project1::Foo and project2::Foo are distinct symbols and do not collide. Disadvantage: namespaces can be confusing, precisely because, like classes, they add an extra (nestable) naming axis; and using an unnamed namespace in a header easily violates C++'s One Definition Rule (ODR).

1) Unnamed namespaces. In .cc files, unnamed namespaces are allowed and even encouraged, to avoid name collisions at run time:

namespace {            // in a .cc file
// the contents of a namespace are not indented
enum { UNUSED, EOF, ERROR };          // frequently used symbols
bool AtEof() { return pos_ == EOF; }  // uses this namespace's EOF
}  // namespace

However, file-scope declarations associated with a particular class should be declared in that class as types, static data members, or static member functions, rather than as members of an unnamed namespace. As shown above, terminate an unnamed namespace with the comment // namespace.

2. Function overloading. Conclusion: if you want to overload a function, consider putting the parameter information in the name; for example, use AppendString() and AppendInt() rather than Append().

3. Default arguments. Default function parameters are forbidden. Advantage: when a function has many parameters with common default values that are occasionally overridden, default arguments save defining extra functions for rarely used exceptions. Disadvantage: people often work out how to use an API by reading existing code, and default arguments mean copy-pasted calls do not show all the parameters; when the defaults do not suit new code, this can cause major problems. Conclusion: all parameters must be specified explicitly, forcing programmers to think about the API and the value of each argument, and avoiding defaults the programmer may not know about.

8. Casting. Use C++ casts such as static_cast<>(); never int y = (int)x or int y = int(x). Definition: C++ introduced cast operators distinct from C's. Advantage: the problem with C casts is their ambiguity - sometimes they force a conversion (as in (int)3.5), sometimes they reinterpret a type (as in (int)"hello"); C++ casts are also easier to search for and more conspicuous. Disadvantage: the syntax is nasty. Conclusion: use C++-style casts, not C-style:
1) static_cast: like a C cast for value conversions, or for an explicit up-cast of a pointer from a class to its superclass;
2) const_cast: removes const;
3) reinterpret_cast: unsafe conversions between pointer types, or between pointers and integers; use only when you fully understand what you are doing;
4) dynamic_cast: do not use outside of tests; if you need to determine type information at run time outside of unit tests, your design is flawed (see RTTI).

10. Preincrement and predecrement. Use the prefix form (++i) of the increment and decrement operators with iterators and other template objects. Definition: when the value of the expression is not used after an increment (++i or i++) or decrement (--i or i--), one must decide between the prefix and postfix forms. Advantage: ignoring the return value, prefix increment (++i) is usually more efficient than postfix (i++), because the postfix form must make a copy of the value of i; if i is an iterator or another non-numeric type, the copy can be expensive. Since the two forms do the same thing (when the expression value is unused), why not simply use the prefix form? Disadvantage: in C, the tradition when the expression value is unused is the postfix form, especially in for loops; some find it more readable, like natural language, with the subject (i) before the verb (++). Conclusion: for simple numeric (non-object) values, either form is fine; for iterators and template types, use prefix increment (and decrement).

16. sizeof. Prefer sizeof(varname) to sizeof(type) whenever possible. Use sizeof(varname) because the code stays in sync automatically when the variable's type changes. sizeof(type) may occasionally make sense, but it should still be avoided, because it does not stay in sync when the variable's type changes.

8. TODO comments. Use TODO comments for solutions that are temporary, short-term, or good enough but not perfect. Such comments use the string TODO in all caps, followed in parentheses by your name, email address, and so on, optionally followed by a colon; the point is that a uniform TODO format can be searched:

// TODO([email protected]): Use a "*" here for concatenation operator.
// TODO(Zeke) change this to use relations.

If the TODO is of the form "at some future date do something", include a specific date ("Fix by November 2005") or event ("Remove this code when all clients can handle XML responses."). TODOs are great: sometimes a comment really is there to mark work that is unfinished or not quite satisfactory, and a single search shows what work remains; it even saves keeping a log.
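To make rule 2 concrete, here is a minimal compilable sketch of the forward-declaration technique described above. The File and Logger classes are made-up examples, not from the guide; the two classes are collapsed into one translation unit here so the sketch builds as a single file.

#include <cstdio>

// What logger.h would contain: a forward declaration suffices, because Logger
// only stores a File* and declares functions that mention File.
class File;  // forward declaration: no #include of File's header needed

class Logger {
 public:
  explicit Logger(File* sink) : sink_(sink) {}
  void Flush();   // declared here; defined below, where File's definition is visible
 private:
  File* sink_;    // pointer member: the declaration alone is enough
};

// The equivalent of file/base/file.h: the full definition, needed only by code
// that actually calls File's methods (i.e., the .cc file, not the header).
class File {
 public:
  void Sync() { std::puts("synced"); }
};

void Logger::Flush() { sink_->Sync(); }  // requires the definition above

int main() {
  File f;
  Logger log(&f);
  log.Flush();
}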

2010-09-17

XproerIM For VS2008

XproerIM for the Visual Studio 2008 platform. Not many changes. New upgrade plans are under way; a major update is expected within the next three months. Project forum: http://www.5gzl.com/bbs/

2010-09-08

Xinying Network File Upload Plugin: Visual Studio 6.0 Sample

The Xinying Network upload plugin (StorageWebPlug) is a COM control that supports very large file uploads (100GB), with resumable transfer and file MD5 verification, which greatly improve upload efficiency and save bandwidth. It shows detailed upload progress, supports multiple scripting languages, and provides a free JavaScript SDK and a C++ SDK.
Official site: http://www.ncmem.com
Official blog: http://www.cnblogs.com/xproer
Product forum: http://www.5gzl.com/bbs/
Contact email: [email protected]

2010-09-05

XproerIM for VC6, a QQ-Style IM Client

Forum: http://ncmem.vip11.adminftp.org/index.php

Developers are surely familiar with instant messaging (IM) software, and many dream of building an IM client of their own. But IM touches many areas, such as UI, databases, and networking, so the workload is large; one person can hardly cover it all and may well give up in the end under the maintenance burden. And it is not just IM software: the biggest risk to any project is a poor system architecture. The problem stays hidden at the start, but as features accumulate and the code base balloons, the system becomes unmanageable; even adding a small feature can shake the whole system.

The good news: if you still have not given up on building your own IM, try the open-source project XproerIM. XproerIM is an open-source IM modeled on QQ, aiming to become the largest domestic open-source project with the highest code quality. The latest version is written in VC6.0 and is client-only for now; there is no server yet. That does not block XproerIM's open-source progress, because it exposes rich, convenient extension interfaces for custom development, and the source is fully open, so you can build a server tailored to your own network environment on either Linux or Windows.

On the client UI side, XproerIM integrates many excellent open-source projects: menus use CMenuXP (http://www.codeproject.com/KB/menus/menuxp2.aspx), XML handling uses TinyXML, and web services use gSOAP, along with code from various experts collected and organized from CSDN and other sites; their spirit of sharing inspired the open-sourcing of the XproerIM client itself.

To make components and modules as reusable as possible, we spent some time designing a purely object-oriented class library, XproerIM Framework: partly to manage the whole system better and improve collaboration between modules, and partly to make team development more convenient. The framework is not just about richer functionality; it works hard on code quality, style, and naming conventions. This all-round framework makes writing C++ a pleasure for XproerIM developers, and it is no exaggeration to say that XproerIM Framework's code quality will ultimately reach commercial grade.

XproerIM Framework, built by the XproerIM team, is a library of classes, interfaces, and value types included in the XproerIM Framework SDK. It provides access to files, networking, databases, and system facilities, and is the foundation for building Windows and WebService applications, components, and controls. XproerIM Framework is the core component of XproerIM and aims to:
  • Simplify C++ coding at the base level, letting developers focus on design and business logic, with design and development teams collaborating to guarantee code quality and performance.
  • Provide a consistent object-oriented programming environment that fixes naming conventions, coding style, and commenting requirements, so that even a newly joined member can handle the work with ease.
  • Keep every member's experience consistent across very different kinds of applications (Windows-based and Web-based); even a member unfamiliar with WebService can use the class library to access WebService and write business-logic code, while third-party WebService systems receive strong support and integrate seamlessly with XproerIM.

Thanks to its highly refined architecture, the XproerIM client also serves as a valuable study resource for newcomers. The client community is being actively built: we provide not only source code but also detailed development documentation and technical documents to help developers understand the whole system. Finally, for all of XproerIM's strengths, I should frankly note that it is not yet complete: some features and controls are unfinished. For developers, since XproerIM offers such an excellent architecture, finishing the client can be seen as practice that improves your own skills, and the XproerIM team will do its best to make that process a pleasure!

2010-06-30

ActiveX Multi-File Upload Plugin

The Xinying Network upload plugin (StorageWebPlug) is a COM control supporting very large file uploads (2GB, extensible), with resumable transfer and file MD5 verification, which greatly improve upload efficiency and save bandwidth. It shows detailed upload progress and supports multiple scripting languages; you are welcome to download and try it. A free JavaScript SDK is provided.

Product highlights:
1. Uploads use an enhanced FTP protocol; users can upload very large files to the server from a browser (files over 1GB are supported).
2. Resumable upload: the system intelligently resumes files that did not finish uploading; resuming is simpler, more convenient, and faster.
3. Batch upload: multiple files can be uploaded at once, with detailed status display (per-file progress, overall progress, transfer rate, time remaining, and so on).
4. Xinying Network provides a free JavaScript SDK; through the bundled JavaScript class library, users can integrate quickly with an existing system.
5. StorageWebPlug offers complete interfaces plus help and developer documentation; developers can set the upload save path dynamically, set allowed extensions, maximum sizes, and more. Highly customizable.
6. Support for various proxies (HTTP, SOCKS4, SOCKS5, etc.).
7. The component uses multithreading to guarantee upload efficiency.
8. Batch file upload, with HTML form variables bound on the client and received on the server.
9. Server-side save paths can be set freely and changed flexibly; network paths are supported.
10. For security, the server component can restrict user permissions.
11. Limits on single-file size, an upload bandwidth cap, allowed file extensions, denied file extensions, and so on.
12. Packet size adapts to network conditions during upload to avoid congestion.
13. The control is written in ATL; the cab package is only 59KB and downloads in under 12 seconds on a 56k modem.
14. The server side supports Windows 2000 Server/Windows 2003 Server/Windows NT/Windows XP/Unix/Linux.
15. An operation UI built to Xinying Network's industry-leading design standard gives your system and product higher quality.

Product introduction: http://www.ncmem.com/service_storagewebplug.aspx
Download: http://www.ncmem.com/download.aspx

2010-04-12

Microsoft MSXML SDK

Microsoft XML Core Services (MSXML) 4.0 - MSXML 4.0 SDK

What is MSXML?
Microsoft® XML Core Services (MSXML) 4.0 allows customers to build high-performance XML-based applications that provide a high degree of interoperability with other applications that adhere to the XML 1.0 standard. Among the core services MSXML 4.0 provides is developer support for the following:
  • The Document Object Model (DOM), a standard library of application programming interfaces (APIs) for accessing XML documents.
  • The XML Schema definition language (XSD), a current W3C standard for using XML to create XML Schemas. XML Schemas can be used to validate other XML documents.
  • The Schema Object Model (SOM), an additional set of APIs for accessing XML Schema documents programmatically.
  • Extensible Stylesheet Language Transformations (XSLT) 1.0, a current W3C XML style sheet language standard. XSLT is recommended for transforming XML documents.
  • The XML Path Language (XPath) 1.0, a current W3C XML standard used by XSLT and other XML programming vocabularies to query and filter data stored in XML documents.
  • The Simple API for XML (SAX), a programmatic alternative to DOM-based processing.

What's the MSXML SDK?
The MSXML 4.0 Software Development Kit (SDK) provides conceptual and reference information for developers using MSXML. The SDK consists of Developer's Guides and Reference sections for each of MSXML's core technologies. For more information and links to each of these sections, see Roadmap to the MSXML 4.0 SDK. In addition to the sections for each core technology, the SDK contains the current overview section, which includes the following topics providing general information about MSXML and this SDK: Roadmap to the MSXML 4.0 SDK; What's New; Installing and Registering the MSXML 4.0 SDK; Redistributing MSXML; MSXML API History; GUID and ProgID Information; Dependencies in MSXML 4.0; Tips for Converting Samples to VBScript; Using Language Filtering; Sample XML File (books.xml); Copyright and Legal Information.

What's New?
This release of the MSXML SDK provides the following new sections: Program with DOM in C/C++; Program with DOM in C/C++ Using Smart Pointer Class Wrappers; Program with DOM in Visual Basic; Program with DOM in JScript; XDR Schema Developer's Guide; DTD Developer's Guide; DTD Reference. For information about new product features in MSXML 4.0, see What's New.

Tips and FAQ
The following sections, located throughout the SDK, provide tips, frequently asked questions, and best practice information: Tips for Converting Samples to VBScript; FAQ About Schemas; FAQ About the SOM; FAQ About XSLT; FAQ About SAX2; Best Practices for Securing MSXML Code.

Support
For free help with MSXML issues, try posting to the MSXML public newsgroup. This newsgroup is monitored by Microsoft Product Support Services (PSS) engineers who cover MSXML, and by other experienced MSXML developers. Further information about support options can be found on the Microsoft Help and Support Web site.

Other Resources
Microsoft Data Access Components (MDAC) | Web Services, part of the Platform SDK documentation
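For reference, here is a minimal sketch in the spirit of the SDK's "Program with DOM in C/C++" section: it loads the sample file books.xml mentioned above and reads one node. Error handling is trimmed, and the XPath query is illustrative rather than taken from the SDK.

#include <windows.h>
#include <msxml2.h>
#include <cstdio>

#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "oleaut32.lib")

int main() {
    CoInitialize(nullptr);
    IXMLDOMDocument* doc = nullptr;
    HRESULT hr = CoCreateInstance(__uuidof(DOMDocument), nullptr,
                                  CLSCTX_INPROC_SERVER,
                                  __uuidof(IXMLDOMDocument),
                                  reinterpret_cast<void**>(&doc));
    if (SUCCEEDED(hr)) {
        doc->put_async(VARIANT_FALSE);               // load synchronously
        VARIANT src;
        VariantInit(&src);
        src.vt = VT_BSTR;
        src.bstrVal = SysAllocString(L"books.xml");  // the SDK's sample XML file
        VARIANT_BOOL ok = VARIANT_FALSE;
        doc->load(src, &ok);
        if (ok == VARIANT_TRUE) {
            IXMLDOMNode* node = nullptr;
            BSTR query = SysAllocString(L"//book/title");  // illustrative XPath
            if (SUCCEEDED(doc->selectSingleNode(query, &node)) && node) {
                BSTR text = nullptr;
                node->get_text(&text);
                wprintf(L"first title: %s\n", text);
                SysFreeString(text);
                node->Release();
            }
            SysFreeString(query);
        }
        VariantClear(&src);  // frees the BSTR held in the VARIANT
        doc->Release();
    }
    CoUninitialize();
}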

2010-03-04

Distributed System Design.pdf

Chapter 1: Introduction

Clearly, future demands for computing speed, system reliability, and cost-effectiveness will drive the development of alternative computer models to replace the traditional von Neumann architecture. With the advent of computer networks, a new dream became possible: distributed computing. Whenever a user has a task to complete, distributed computing offers transparent access to as much computing power and data as possible, while pursuing both high performance and high reliability. Interest in distributed computing systems has grown rapidly over the past ten years. The topics of distributed computing are diverse, and many researchers are investigating all aspects of distributed hardware architecture and distributed software design to exploit their potential parallelism and fault tolerance. In this chapter we consider some basic concepts and issues related to distributed computing, and we list the topics the book covers.

1.1 Motivation

The development of computer technology can be described by the different ways computers have been used. In the 1950s computers were serial processors, running one job at a time to completion; they were operated from a console by an operator and were inaccessible to ordinary users. In the 1960s, jobs with similar needs were run through the computer as a group, in batches, to reduce the computer's idle time. Other techniques also appeared in this period, such as offline processing using buffering, spooling, and multiprogramming. The 1970s produced timesharing systems, not only as a means of raising utilization but also to bring users closer to the computer. Timesharing was the first step toward distributed systems: users could share and access resources from different locations. The 1980s were the decade of personal computing: people had machines dedicated to their own use. Thanks to the excellent performance/price ratio of microprocessor-based systems and the steady advance of network technology, the 1990s became the decade of distributed systems.

A distributed system can have different physical compositions: a set of personal computers interconnected by a communication network; a set of workstations that share not only file systems and database systems but also CPU cycles (where, in most cases, local processes have higher priority than remote ones, a process being a running program); or a processor pool, in which terminals are not attached to any particular processor and all resources are truly shared, whether processes are local or remote.

A distributed system is seamless, meaning the interfaces between the networked functional units are largely invisible to the user. The idea of distributed computing has also been applied to database systems [16, 38, 49], file systems [4, 24, 33, 43, 54], operating systems [2, 39, 46], and general environments [19, 32, 35]. Another way to express the same idea is that the user sees the system as a virtual uniprocessor rather than…

2010-02-02

Artificial Intelligence.pdf (china-pub)

I believe that understanding intelligence involves understanding: how knowledge is acquired, represented, and stored; how intelligent behavior is generated and learned; how motives, emotions, and priorities are developed and used; how sensor signals are transformed into symbols; how symbols are used to perform logic, to reason about the past, and to plan for the future; and how the mechanisms of intelligence produce the phenomena of illusion, belief, hope, fear, dreams, even kindness and love. I believe that a fundamental understanding of these things would be a scientific achievement on a par with atomic physics, relativity, and molecular genetics.

— James Albus, "Reply to Henry Hexmoor", from URL: http://tommy.jsc.nasa.gov/er/er6/mrl/papers/symposium/albus.txt, February 13, 1995

2010-02-02

SHTML Tutorial (TXT Format)

What is SHTML?
SHTML is the file extension for HTML files that use SSI (Server Side Includes). SSI, usually rendered as "server-side includes", is a server-based page-building technique similar in spirit to ASP.

How SSI works:
Before content is sent to the browser, server-side include (SSI) directives can pull text, graphics, or application information into the page; for example, a time/date stamp, a copyright notice, or a form for customers to fill in and return. For text or graphics that recur across many files, include files are a convenient approach: store the content once in an include file instead of typing it into every file, call it with one simple statement that tells the web server to insert it into the page, and make any later change in just one place. Because files containing SSI directives require special processing, every SSI file must be given an SSI file extension; the defaults are .stm, .shtm, and .shtml. The web server processes SSI directives while serving the page: when it encounters one, it inserts the included file's content directly into the HTML page, and if the included file itself contains SSI directives, they are inserted too. Beyond basic file inclusion, SSI directives can insert information about a file (such as its size) or run applications and shell commands.

A problem site maintainers constantly run into: the site structure is already fixed, yet updating one small piece of content forces rebuilding a large batch of pages. SSI offers a simple, effective solution: keep the site's basic structure in a few simple HTML files (templates), then just push the text to the server and let the program generate pages from the templates, which makes large sites easy to manage.

So SHTML pages serve much the same purpose as ASP, but because SSI is implemented as a server API it runs faster and more efficiently: faster than ASP, slower than plain HTML. And since server-side inclusion is available, pages are easy to update (especially batch updates of banners, copyright lines, and so on). Imagine a chunk of HTML into which you need to splice some special server-side work, such as inserting other HTML fragments: with ASP the job might take 5s, while with SHTML it might take only 4s.

What is SSI for?
We bring up SSI because SHTML is the acronym of Server-Parsed HTML: HTML text containing embedded server-side include commands. Before being delivered to the browser, the server completely reads, parses, and modifies the SHTML document. shtml resembles asp in some ways: an .shtml file uses SSI directives much like directives in asp. You can write SSI directives into an SHTML file; when a client requests it, the server reads and interprets the file, resolving the SSI directives it contains. For example, you can reference other html files from an SHTML file with #include; the file the server sends to the client is the already-interpreted SHTML with no SSI directives left in it. This gives HTML something it lacks, a dynamic SHTML - an evolution of HTML, so to speak. Sina's news system works this way: the article content is fixed, while the ads and menus above it are pulled in with #include.

Currently, the main uses are:
1. Display server-side environment variables (#echo)
2. Insert text content directly into a document (#include)
3. Display information about a web document (#flastmod, #fsize), such as its modification date and size
4. Execute programs on the server (#exec), such as CGI or other executables
5. Set the display format of SSI information (#config), e.g. how modification dates and sizes are shown
Advanced SSI (XSSI) can set variables and use if conditionals.

Using SSI:
SSI is a set of commands provided for the web server; the commands are simply embedded in comments inside the HTML document. For example:
<!--#include file="info.htm"-->
is an SSI directive that copies the content of info.htm into the current page; visitors see info.htm's content displayed like any other HTML document. Other SSI directives follow essentially the same form as this example, so using SSI amounts to inserting a little code, in a very simple form. If the web server does not support SSI, it simply treats the directive as a comment and skips its content; the browser likewise ignores it.

How do I configure SSI on my web server?
On some web servers (such as IIS 4.0/SAMBAR 4.2), files containing the #include directive must use an extension that has been mapped to the SSI interpreter; otherwise, the web server will not process the SSI directive. By default, the extensions .stm, .shtm, and .shtml are mapped to the interpreter (Ssinc.dll).
Apache behaves according to your configuration; edit srm.conf, for example:
AddType text/x-server-parsed-html .shtml — parse SSI directives only in files with the .shtml extension
AddType text/x-server-parsed-html .html — parse SSI directives in all HTML documents
Netscape web servers enable SSI directly through the Administration Server.
WebSite: in the Server Admin program's Mapping tab, add the extension with content type wwwserver/html-ssi.
The CERN server does not support SSI; as a workaround you can download a PERL script from http://sw.cse.bris.ac.uk/WebTools/fakessi.html that lets your CERN server use some SSI directives (the exec directive is not supported).

Basic SSI directive format
Code: <!--#directive parameter="value"-->
Example: <!--#include file="info.htm"-->
Notes:
1. <!-- --> is HTML comment syntax; when the web server does not support SSI, it ignores this information.
2. #include is one of the SSI directives.
3. file is a parameter of include; info.htm is the parameter value, here the name of the document to include.
Cautions:
1. There is no space between <!-- and the # sign; the only space sits between the SSI directive and its parameters.
2. The punctuation ="" above is all required.
3. SSI directives are case-sensitive, so parameters must be lowercase to take effect.

SSI directives in detail

#echo
Purpose: insert environment variables into the page.
Syntax: <!--#echo var="variable name"-->
Examples:
<!--#echo var="DOCUMENT_NAME"--> the name of this document
<!--#echo var="DATE_LOCAL"--> the current time
<!--#echo var="REMOTE_ADDR"--> your IP address

#include
Purpose: insert the content of a text file directly into the document page.
Syntax:
<!--#include file="filename"-->
<!--#include virtual="filename"-->
file takes a relative path, resolved against the directory of the document that uses the #include directive. The included file can be in the same directory or a subdirectory, but not in a parent directory; nav_head.htm in the current directory is written file="nav_head.htm".
virtual takes the full path of a virtual directory on the web site; nav_head.htm in the hoyi directory under the server document root is written file="/hoyi/nav_head.htm" [sic].
Parameters:
file - position of the included file relative to this document
virtual - position relative to the server document root
Notes:
1. The filename must include an extension.
2. The included file may have any extension; I find using the .htm extension most convenient, and Microsoft recommends the .inc extension (a matter of taste).
Examples:
<!--#include file="nav_head.htm"--> insert the header file into the current page
<!--#include file="nav_foot.htm"--> insert the footer file into the current page

#flastmod and #fsize
Purpose: #flastmod shows a file's last-modified date; #fsize shows the file's size.
Syntax:
<!--#flastmod file="filename"-->
<!--#fsize file="filename"-->
Parameters:
file - position relative to this document, e.g. info.txt means info.txt in the current directory
virtual - position relative to the server document root, e.g. /hoyi/info.txt
Note: the filename must include an extension.
Examples:
<!--#flastmod file="news.htm"--> inserts the last-modified date of news.htm in the current directory into the current page
<!--#fsize file="news.htm"--> inserts the size of news.htm in the current directory into the current page

#exec
Purpose: insert the output of an external program into the page; this can be a CGI program or a regular application, depending on whether the cmd or cgi parameter is used.
Syntax:
<!--#exec cmd="filename"-->
<!--#exec cgi="filename"-->
Parameters:
cmd - a regular application
cgi - a CGI script
Examples:
<!--#exec cmd="cat /etc/passwd"--> displays the password file
<!--#exec cmd="dir /b"--> displays the current directory listing
<!--#exec cgi="/cgi-bin/gb.cgi"--> runs the CGI program gb.cgi
<!--#exec cgi="/cgi-bin/access_log.cgi"--> runs the CGI program access_log.cgi
Note: as the examples show, this directive is extremely convenient, but it is also a security problem. To disable it:
1. Apache: delete the line "Options Includes ExecCGI" from access.conf;
2. IIS: to disable the #exec command, modify the SSIExecDisable metabase entry.

#config
Purpose: set the format of error messages, dates, and file sizes returned to the client browser.
Syntax:
<!--#config errmsg="custom error message"-->
<!--#config sizefmt="display unit"-->
<!--#config timefmt="display format"-->
Parameters:
errmsg - a custom SSI error message, in whatever style you like
sizefmt - file-size display: the default is bytes ("bytes"); it can be changed to kilobyte style ("abbrev")
timefmt - time display, the most flexible of the configuration attributes
Example: show the size of a nonexistent file
<!--#config errmsg="Server error, please contact the administrator, [email protected], thank you!"-->
<!--#fsize file="nonexistent-file.htm"-->
Show a file size in kilobytes:
<!--#config sizefmt="abbrev"-->
<!--#fsize file="news.htm"-->
Show the time in a specific format:
<!--#config timefmt="%Y/%m/%d, weekday %W, Beijing time %H:%M:%s; %j days of %Y have passed; today is week %U of %Y"-->
<!--#echo var="DATE_LOCAL"-->
Show the weekday, month, and time zone:
<!--#config timefmt="Today is %A, %B; the server time zone is %z"-->
<!--#echo var="DATE_LOCAL"-->

XSSI
XSSI (Extended SSI) is a set of advanced SSI directives built into the mod_include module of Apache 1.2 and later. The available directives include: #printenv, #set, #if.
#printenv
Purpose: display all environment variables currently present in the web server environment.
Syntax: <!--#printenv-->
#set
Purpose: assign a value to a variable for use in later if statements.
Syntax: <!--#set var="variable" value="value"-->
Example: <!--#set var="color" value="red"-->
#if
Purpose: build pages whose data can vary, displayed according to the conditions evaluated in the if statement.
Syntax:
<!--#if expr="$variable=\"valueA\""-->
content
<!--#elif expr="$variable=\"valueB\""-->
content
<!--#else-->
content
<!--#endif"-->
Example:
<!--#if expr="$SERVER_NAME=\"www.baidu.com\""--> Welcome to http://www.baidu.com
<!--#elif expr="$SERVER_NAME=\"www.google.com\"" --> Welcome to http://www.google.com
<!--#else--> Welcome to Afly's Blog!
<!--#endif"-->
Note: the backslashes in the preceding directives escape the inner quotes so they are not interpreted as ending the expression. They must not be omitted.

1. The Config command
Config is mainly used to modify SSI's default settings.
Errmsg: sets the default error message. For the custom message to be returned properly, the Errmsg parameter must be placed in the HTML file before any other SSI command; otherwise the client sees only the default error message, not your custom one.
<!--#config errmsg="Error! Please email [email protected]" -->
Timefmt: defines the date and time format; the Timefmt parameter must be used before the echo command.
<!--#config timefmt="%A, %B %d, %Y"-->
<!--#echo var="LAST_MODIFIED" -->
Output: Wednesday, April 12, 2000
If the %A %B %d used in the example look unfamiliar, the original tutorial summarizes the more common SSI date and time formats in a table.
Sizefmt: determines whether file size is reported in bytes, kilobytes, or megabytes. For bytes, the value is "bytes"; kilobytes and megabytes use abbreviated forms. Likewise, the sizefmt parameter must be placed before the fsize command.
<!--#config sizefmt="bytes" -->
<!--#fsize file="index.html" -->

2. The Include command
Include inserts text or graphics from other documents into the document currently being parsed; this is the heart of SSI. With Include, you can update an entire site instantly by changing just one file!
Include takes two distinct parameters:
Virtual: gives a virtual path to a document on the server.
File: gives a path relative to the current directory; "../" is not allowed, nor are absolute paths.
<!--#include virtual="/includes/header.html" -->
<!--#include file="header.html" -->
The latter form requires every directory to contain a header.html file.

3. The Echo command
Echo can display the following environment variables:
DOCUMENT_NAME: the name of the current document.
DOCUMENT_URI: the virtual path of the current document. For example:
<!--#echo var="DOCUMENT_NAME" -->
<!--#echo var="DOCUMENT_URI" -->
As a site keeps growing, those ever-longer URLs will surely be a headache. With SSI, the problem solves itself: combine the site's domain name with an SSI command to display the complete URL:
http://YourDomain<!--#echo var="DOCUMENT_URI" -->
QUERY_STRING_UNESCAPED: the unescaped query string sent by the client, with all special characters preceded by the escape character "\". For example:
<!--#echo var="QUERY_STRING_UNESCAPED" -->
DATE_LOCAL: the date and time in the server's configured time zone; combine with the config command's timefmt parameter to customize the output. For example:
<!--#config timefmt="%A, the %d of %B, in the year %Y" -->
<!--#echo var="DATE_LOCAL" -->
Output: Saturday, the 15 of April, in the year 2000
DATE_GMT: like DATE_LOCAL, but returns the date based on Greenwich Mean Time. For example:
<!--#echo var="DATE_GMT" -->
LAST_MODIFIED: the current document's last update time. Another very practical SSI feature: one simple line of text in an HTML document displays the update time dynamically on the page.
<!--#echo var="LAST_MODIFIED" -->
CGI environment variables: besides the SSI environment variables, the echo command can display the following CGI environment variables:
SERVER_SOFTWARE: the web server software's name and version. For example: <!--#echo var="SERVER_SOFTWARE" -->
SERVER_NAME: the server's hostname, DNS alias, or IP address. For example: <!--#echo var="SERVER_NAME" -->
SERVER_PROTOCOL: the protocol name and version of the client request, such as HTTP/1.0. For example: <!--#echo var="SERVER_PROTOCOL" -->
SERVER_PORT: the server's response port. For example: <!--#echo var="SERVER_PORT" -->
REQUEST_METHOD: the client's document request method: GET, HEAD, or POST. For example: <!--#echo var="REQUEST_METHOD" -->
REMOTE_HOST: the hostname of the client making the request. <!--#echo var="REMOTE_HOST" -->
REMOTE_ADDR: the IP address of the client making the request. <!--#echo var="REMOTE_ADDR" -->
AUTH_TYPE: the user authentication method. <!--#echo var="AUTH_TYPE" -->
REMOTE_USER: the account name used by a user accessing a protected page. <!--#echo var="REMOTE_USER" -->

4. Fsize: displays the size of the specified file; combine with the config command's sizefmt parameter to customize the output format.
<!--#fsize file="index_working.html" -->

5. Flastmod: displays the last-modified date of the specified file; combine with the config command's timefmt parameter to control the output format.
<!--#config timefmt="%A, the %d of %B, in the year %Y" -->
<!--#flastmod file="file.html" -->
Here, the flastmod parameter can display the update dates of all pages linked from one page, as follows:
<!--#config timefmt=" %B %d, %Y" -->
<A HREF="/directory/file.html">File</A> <!--#flastmod virtual="/directory/file.html" -->
<A HREF="/another_directory/another_file.html">Another File</A> <!--#flastmod virtual="/another_directory/another_file.html" -->
Output:
File April 19, 2000
Another File January 08, 2000

6. Exec
Exec can run CGI scripts or shell commands. Usage:
Cmd: executes the given string with /bin/sh; if SSI uses the IncludesNOEXEC option, this command is disabled.
Cgi: runs a CGI script. For example, the following uses the counter.pl script in the server's cgi-bin directory to place a counter on every page:
<!--#exec cgi="/cgi-bin/counter.pl" -->

The difference between SHTML and HTML
Let us first look at the difference between SHTML and HTML. In one sentence: SHTML is not HTML but a server API; shtml is html generated dynamically by the server. Both are hypertext formats, but an shtml file is one used with the SSI technique, that is, Server Side Include. If the web server has SSI support, as do most web servers (especially those on Unix platforms) such as Netscape Enterprise Server, it will honor the SSI commands.

2010-01-07

SQL Examples Collection V1.0

Updates:
(1) Added a high-performance single-table paging stored procedure (written by a Microsoft MVP)
(2) Added a high-performance multi-table paging stored procedure
(3) Added CASE examples
(4) Added an example for checking whether a table exists

2009-11-30

Microsoft Internal Training Material - SQL Performance Tuning 5

Contents
Overview
Lesson 1: Index Concepts
Lesson 2: Concepts - Statistics
Lesson 3: Concepts - Query Optimization
Lesson 4: Information Collection and Analysis
Lesson 5: Formulating and Implementing Resolution

Module 6: Troubleshooting Query Performance

Overview
At the end of this module, you will be able to:
  • Describe the different types of indexes and how indexes can be used to improve performance.
  • Describe what statistics are used for and how they can help in optimizing query performance.
  • Describe how queries are optimized.
  • Analyze the information collected from various tools.
  • Formulate resolution to query performance problems.

Lesson 1: Index Concepts
Indexes are the most useful tool for improving query performance. Without a useful index, Microsoft® SQL Server™ must search every row on every page in the table to find the rows to return. With a multitable query, SQL Server must sometimes search a table multiple times, so each page is scanned much more than once. Having useful indexes speeds up finding individual rows in a table, as well as finding the matching rows needed to join two tables.

What You Will Learn
After completing this lesson, you will be able to:
  • Understand the structure of SQL Server indexes.
  • Describe how SQL Server uses indexes to find rows.
  • Describe how fillfactor can impact the performance of data retrieval and insertion.
  • Describe the different types of fragmentation that can occur within an index.

Recommended Reading
  • Chapter 8: "Indexes", Inside SQL Server 2000 by Kalen Delaney
  • Chapter 11: "Batches, Stored Procedures and Functions", Inside SQL Server 2000 by Kalen Delaney

Finding Rows without Indexes
With no indexes, a table must be scanned. SQL Server keeps track of which pages belong to a table or index by using IAM pages. If there is no clustered index, there is a sysindexes row for the table with an indid value of 0, and that row will keep track of the address of the first IAM for the table. The IAM is a giant bitmap, and every 1 bit indicates that the corresponding extent belongs to the table. The IAM allows SQL Server to do efficient prefetching of the table's extents, but every row still must be examined.

General Index Structure
All SQL Server indexes are organized as B-trees. Indexes in SQL Server store their information using standard B-trees. A B-tree provides fast access to data by searching on a key value of the index. B-trees cluster records with similar keys. The B stands for balanced, and balancing the tree is a core feature of a B-tree's usefulness. The trees are managed, and branches are grafted as necessary, so that navigating down the tree to find a value and locate a specific record takes only a few page accesses. Because the trees are balanced, finding any record requires about the same amount of resources, and retrieval speed is consistent because the index has the same depth throughout.

Clustered and Nonclustered Indexes
Both index types have many common features. An index consists of a tree with a root from which the navigation begins, possible intermediate index levels, and bottom-level leaf pages. You use the index to find the correct leaf page. The number of levels in an index will vary depending on the number of rows in the table and the size of the key column or columns for the index. If you create an index using a large key, fewer entries will fit on a page, so more pages (and possibly more levels) will be needed for the index.
On a qualified select, update, or delete, the correct leaf page will be the lowest page of the tree in which one or more rows with the specified key or keys reside. A qualified operation is one that affects only specific rows that satisfy the conditions of a WHERE clause, as opposed to accessing the whole table.

An index can have multiple node levels. An index page above the leaf is called a node page. Each index row in node pages contains an index key (or set of keys for a composite index) and a pointer to a page at the next level for which the first key value is the same as the key value in the current index row.

The leaf level contains all key values. In any index, whether clustered or nonclustered, the leaf level contains every key value, in key sequence. In SQL Server 2000, the sequence can be either ascending or descending.

The sysindexes table contains all sizing, location, and distribution information. Any information about the size of indexes or tables is stored in sysindexes. The only source of any storage location information is the sysindexes table, which keeps track of the address of the root page for every index, and the first IAM page for the index or table. There is also a column for the first page of the table, but this is not guaranteed to be reliable. SQL Server can find all pages belonging to an index or table by examining the IAM pages. Sysindexes contains a pointer to the first IAM page, and each IAM page contains a pointer to the next one.

The Difference between Clustered and Nonclustered Indexes
The main difference between the two types of indexes is how much information is stored at the leaf. The leaf levels of both types of indexes contain all the key values in order, but they also contain other information.

Clustered Indexes
The leaf level of a clustered index is the data. The leaf level of a clustered index contains the data pages, not just the index keys. Another way to say this is that the data itself is part of the clustered index. A clustered index keeps the data in a table ordered around the key. The data pages in the table are kept in a doubly linked list called the page chain. The order of pages in the page chain, and the order of rows on the data pages, is the order of the index key or keys. Deciding which key to cluster on is an important performance consideration. When the index is traversed to the leaf level, the data itself has been retrieved, not simply pointed to.

Uniqueness is maintained in key values. In SQL Server 2000, all clustered indexes are unique. If you build a clustered index without specifying the unique keyword, SQL Server forces uniqueness by adding a uniqueifier to the rows when necessary. This uniqueifier is a 4-byte value added as an additional sort key to only the rows that have duplicates of their primary sort key. You can see this extra value if you use DBCC PAGE to look at the actual index rows, as described in the section on index internals.

Finding Rows in a Clustered Index
The leaf level of a clustered index contains the data. A clustered index is like a telephone directory in which all of the rows for customers with the same last name are clustered together in the same part of the book. Just as the organization of a telephone directory makes it easy for a person to search, SQL Server quickly searches a table with a clustered index. Because a clustered index determines the sequence in which rows are stored in a table, there can only be one clustered index for a table at a time.
Performance Considerations
Keeping your clustered key value small increases the number of index rows that can be placed on an index page and decreases the number of levels that must be traversed. This minimizes I/O. As we'll see, the clustered key is duplicated in every nonclustered index row, so keeping your clustered key small will allow more index rows to fit per page in all your indexes.
Note: the query corresponding to the slide is:
SELECT lastname, firstname FROM member WHERE lastname = 'Ota'

Nonclustered Indexes
The leaf level of a nonclustered index contains a bookmark. A nonclustered index is like the index of a textbook. The data is stored in one place and the index is stored in another. Pointers indicate the storage location of the indexed items in the underlying table. In a nonclustered index, the leaf level contains each index key, plus a bookmark that tells SQL Server where to find the data row corresponding to the key in the index. A bookmark can take one of two forms:
  • If the table has a clustered index, the bookmark is the clustered index key for the corresponding data row. This clustered key can be multiple columns if the clustered index is composite, or is defined to be non-unique.
  • If the table is a heap (in other words, it has no clustered index), the bookmark is a RID, which is an actual row locator in the form File#:Page#:Slot#.

Finding Rows with a NC Index on a Heap
Nonclustered indexes are very efficient when searching for a single row. After the nonclustered key at the leaf level of the index is found, only one more page access is needed to find the data row. Searching for a single row using a nonclustered index is almost as efficient as searching for a single row in a clustered index. However, if we are searching for multiple rows, such as duplicate values, or keys in a range, anything more than a small number of rows will make the nonclustered index search very inefficient.
Note: the query corresponding to the slide is:
SELECT lastname, firstname FROM member WHERE lastname BETWEEN 'Master' AND 'Rudd'

Finding Rows with a NC Index on a Clustered Table
A clustered key is used as the bookmark for all nonclustered indexes. If the table has a clustered index, all columns of the clustered key will be duplicated in the nonclustered index leaf rows, unless there is overlap between the clustered and nonclustered key. For example, if the clustered index is on (lastname, firstname) and a nonclustered index is on firstname, the firstname value will not be duplicated in the nonclustered index leaf rows.
Note: the query corresponding to the slide is:
SELECT lastname, firstname, phone FROM member WHERE firstname = 'Mike'

Covering Indexes
A covering index provides the fastest data access. A covering index contains ALL the fields accessed in the query. Normally, only the columns in the WHERE clause are helpful in determining useful indexes, but for a covering index, all columns must be included. If all columns needed for the query are in the index, SQL Server never needs to access the data pages. If even one column in the query is not part of the index, the data rows must be accessed. The leaf level of an index is the only level that contains every key value, or set of key values. For a clustered index, the leaf level is the data itself, so in reality, a clustered index ALWAYS covers any query. Nevertheless, for most of our optimization discussions, we only consider nonclustered indexes.
Scanning the leaf level of a nonclustered index is almost always faster than scanning a clustered index, so covering indexes are particularly valuable when we need ALL the key values of a particular nonclustered index.
Example: select an aggregate value of a column with a clustered index. Suppose we have a nonclustered index on price; this query is covered:
SELECT avg(price) from titles
Since the clustered key is included in every nonclustered index row, the clustered key can be included in the covering. Suppose you have a nonclustered index on price and a clustered index on title_id; then this query is covered:
SELECT title_id, price FROM titles WHERE price between 10 and 20

Performance Considerations
In general, you do want to keep your indexes narrow. However, if you have a critical query that just is not giving you satisfactory performance no matter what you do, you should consider creating an index to cover it, or adding one or two extra columns to an existing index, so that the query will be covered. The leaf level of a nonclustered index is like a 'mini' clustered index, so you can have most of the benefits of clustering, even if there already is another clustered index on the table. The tradeoff to adding more, wider indexes for covering queries is the added disk space, and more overhead for updating those columns that are now part of the index.

Bug
In general, SQL Server will detect when a query is covered, and detect the possible covering indexes. However, in some cases, you must force SQL Server to use a covering index by including a WHERE clause, even if the WHERE clause will return ALL the rows in the table. This is SHILOH bug #352079.
Steps to reproduce:
1. Make a copy of the orders table from Northwind:
USE Northwind
CREATE TABLE [NewOrders] (
  [OrderID] [int] NOT NULL,
  [CustomerID] [nchar] (5) NULL,
  [EmployeeID] [int] NULL,
  [OrderDate] [datetime] NULL,
  [RequiredDate] [datetime] NULL,
  [ShippedDate] [datetime] NULL,
  [ShipVia] [int] NULL,
  [Freight] [money] NULL,
  [ShipName] [nvarchar] (40) NULL,
  [ShipAddress] [nvarchar] (60),
  [ShipCity] [nvarchar] (15) NULL,
  [ShipRegion] [nvarchar] (15) NULL,
  [ShipPostalCode] [nvarchar] (10) NULL,
  [ShipCountry] [nvarchar] (15) NULL
)
INSERT into NewOrders SELECT * FROM Orders
2. Build an NC index on OrderDate:
create index dateindex on neworders(orderdate)
3. Test the query by looking at the query plan:
select orderdate from NewOrders
The index is being scanned, as expected.
4. Build an index on OrderID:
create index orderid_index on neworders(orderID)
5. Test the query by looking at the query plan:
select orderdate from NewOrders
Now the TABLE is being scanned, instead of the original index!

Index Intersection
Multiple indexes can be used on a single table. In versions prior to SQL Server 7, only one index could be used for any table to process any single query. The only exception was a query involving an OR. In current SQL Server versions, multiple nonclustered indexes can each be accessed, retrieving a set of keys with bookmarks, and then the result sets can be joined on the common bookmarks. The optimizer weighs the cost of performing the unindexed join on the intermediate result sets, with the cost of only using one index, and then scanning the entire result set from that single index.

Fillfactor and Performance
Creating an index with a low fillfactor delays page splits when inserting. DBCC SHOWCONTIG will show you a low value for "Avg. Page Density" when a low fillfactor has been specified.
This is good for inserts and updates, because it will delay the need to split pages to make room for new rows. It can be bad for scans, because fewer rows will be on each page, and more pages must be read to access the same amount of data. However, this cost will be minimal if the scan density value is good.

Index Reorganization
DBCC SHOWCONTIG provides lots of information. Here's some sample output from running a basic DBCC SHOWCONTIG on the order details table in the Northwind database:

DBCC SHOWCONTIG scanning 'Order Details' table...
Table: 'Order Details' (325576198); index ID: 1, database ID: 6
TABLE level scan performed.
- Pages Scanned................................: 9
- Extents Scanned..............................: 6
- Extent Switches..............................: 5
- Avg. Pages per Extent........................: 1.5
- Scan Density [Best Count:Actual Count].......: 33.33% [2:6]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 16.67%
- Avg. Bytes Free per Page.....................: 673.2
- Avg. Page Density (full).....................: 91.68%

By default, DBCC SHOWCONTIG scans the page chain at the leaf level of the specified index and keeps track of the following values:
  • Average number of bytes free on each page (Avg. Bytes Free per Page)
  • Number of pages accessed (Pages Scanned)
  • Number of extents accessed (Extents Scanned)
  • Number of times a page had a lower page number than the previous page in the scan (this value for out-of-order pages is not displayed, but is used for additional computations)
  • Number of times a page in the scan was on a different extent than the previous page in the scan (Extent Switches)

SQL Server also keeps track of all the extents that have been accessed, and then it determines how many gaps are in the used extents. An extent is identified by the page number of its first page. So, if extents 8, 16, 24, 32, and 40 make up an index, there are no gaps. If the extents are 8, 16, 24, and 40, there is one gap. The value in DBCC SHOWCONTIG's output called Extent Scan Fragmentation is computed by dividing the number of gaps by the number of extents, so in this example the Extent Scan Fragmentation is 1/4, or 25 percent. A table using extents 8, 24, 40, and 56 has three gaps, and its Extent Scan Fragmentation is 3/4, or 75 percent. The maximum number of gaps is the number of extents minus 1, so Extent Scan Fragmentation can never be 100 percent.

The value in DBCC SHOWCONTIG's output called Logical Scan Fragmentation is computed by dividing the number of out-of-order pages by the number of pages in the table. This value is meaningless in a heap. You can use either the Extent Scan Fragmentation value or the Logical Scan Fragmentation value to determine the general level of fragmentation in a table. The lower the value, the less fragmentation there is. Alternatively, you can use the value called Scan Density, which is computed by dividing the optimum number of extent switches by the actual number of extent switches. A high value means that there is little fragmentation. Scan Density is not valid if the table spans multiple files; therefore, it is less useful than the other values.

SQL Server 2000 allows online defragmentation. You can choose from several methods for removing fragmentation from an index. You could rebuild the index and have SQL Server allocate all new contiguous pages for you.
To rebuild the index, you can use a simple DROP INDEX and CREATE INDEX combination, but in many cases using these commands is less than optimal. In particular, if the index is supporting a constraint, you cannot use the DROP INDEX command. Alternatively, you can use DBCC DBREINDEX, which can rebuild all the indexes on a table in one operation, or you can use the drop_existing clause along with CREATE INDEX. The drawback of these methods is that the table is unavailable while SQL Server is rebuilding the index. When you are rebuilding only nonclustered indexes, SQL Server takes a shared lock on the table, which means that users cannot make modifications, but other processes can SELECT from the table. Of course, those SELECT queries cannot take advantage of the index you are rebuilding, so they might not perform as well as they would otherwise. If you are rebuilding a clustered index, SQL Server takes an exclusive lock and does not allow access to the table, so your data is temporarily unavailable.

SQL Server 2000 lets you defragment an index without completely rebuilding it. DBCC INDEXDEFRAG reorders the leaf-level pages into physical order as well as logical order, but using only the pages that are already allocated to the leaf level. This command does an in-place ordering, which is similar to a sorting technique called bubble sort (you might be familiar with this technique if you've studied and compared various sorting algorithms). In-place ordering can reduce logical fragmentation to 2 percent or less, making an ordered scan through the leaf level much faster.

DBCC INDEXDEFRAG also compacts the pages of an index, based on the original fillfactor. The pages will not always end up with the original fillfactor, but SQL Server uses that value as a goal. The defragmentation process attempts to leave at least enough space for one average-size row on each page. In addition, if SQL Server cannot obtain a lock on a page during the compaction phase of DBCC INDEXDEFRAG, it skips the page and does not return to it. Any empty pages created as a result of compaction are removed.

The algorithm SQL Server 2000 uses for DBCC INDEXDEFRAG finds the next physical page in a file belonging to the index's leaf level and the next logical page in the leaf level to swap it with. To find the next physical page, the algorithm scans the IAM pages belonging to that index. In a database spanning multiple files, in which a table or index has pages on more than one file, SQL Server handles pages on different files separately. SQL Server finds the next logical page by scanning the index's leaf level. After each page move, SQL Server drops all locks and saves the last key on the last page it moved. The next iteration of the algorithm uses the last key to find the next logical page. This process lets other users update the table and index while DBCC INDEXDEFRAG is running.

Let us look at an example in which an index's leaf level consists of the following pages in the following logical order:
47 22 83 32 12 90 64
The first key is on page 47, and the last key is on page 64. SQL Server would have to scan the pages in this order to retrieve the data in sorted order. As its first step, DBCC INDEXDEFRAG would find the first physical page, 12, and the first logical page, 47. It would then swap the pages, using a temporary buffer as a holding area. After the first swap, the leaf level would look like this:
12 22 83 32 47 90 64
The next physical page is 22, which is also the next logical page, so no work would be necessary.
DBCC INDEXDEFRAG would then swap the next physical page, 32, with the next logical page, 83:
12 22 32 83 47 90 64
After the next swap of 47 with 83, the leaf level would look like this:
12 22 32 47 83 90 64
Then, the defragmentation process would swap 64 with 83:
12 22 32 47 64 90 83
and 83 with 90:
12 22 32 47 64 83 90
At the end of the DBCC INDEXDEFRAG operation, the pages in the table or index are not contiguous, but their logical order matches their physical order. Now, if the pages were accessed from disk in sorted order, the head would need to move in only one direction. Keep in mind that DBCC INDEXDEFRAG uses only pages that are already part of the index's leaf level; it allocates no new pages. In addition, defragmenting a large table can take quite a while, and you will get a report every 5 minutes about the estimated percentage completed. However, except for the locks on the pages being switched, this command needs no additional locks. All the table's other pages and indexes are fully available for your applications to use during the defragmentation process.

If you must completely rebuild an index because you want a new fillfactor, or if simple defragmentation is not enough because you want to remove all fragmentation from your indexes, another SQL Server 2000 improvement makes index rebuilding less of an imposition on the rest of the system. SQL Server 2000 lets you create an index in parallel, that is, using multiple processors, which drastically reduces the time necessary to perform the rebuild. The algorithm SQL Server 2000 uses allows near-linear scaling with the number of processors you use for the rebuild, so four processors will take only one-fourth the time that one processor requires to rebuild an index. System availability increases because the length of time that a table is unavailable decreases. Note that only the SQL Server 2000 Enterprise Edition supports parallel index creation.

Indexes on Views and Computed Columns
Building an index gives the data physical existence. Normally, views are only logical and the rows comprising the view's data are not generated until the view is accessed. The values for computed columns are typically not stored anywhere in the database; only the definition for the computation is stored, and the computation is redone every time a computed column is accessed. The first index on a view must be a clustered index, so that the leaf level can hold all the actual rows that make up the view. Once that clustered index has been built, and the view's data is now physical, additional (nonclustered) indexes can be built. An index on a computed column can be nonclustered, because all we need to store is the index key values.

Common Prerequisites for Indexed Views and Indexes on Computed Columns
In order for SQL Server to create and use these special indexes, you must have the seven SET options correctly specified: ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER, ANSI_NULLS, ANSI_PADDING, and ANSI_WARNINGS must all be ON; NUMERIC_ROUNDABORT must be OFF. Only deterministic expressions can be used in the definition of indexed views or indexes on computed columns. See the BOL for the list of deterministic functions and expressions. Property functions are available to check if a column or view meets the requirements and is indexable.
SELECT OBJECTPROPERTY (Object_id, 'IsIndexable')
SELECT COLUMNPROPERTY (Object_id, column_name, 'IsIndexable')

Schema binding guarantees that the object definition won't change. A view can only be indexed if it has been built with schema binding.

The SQL Server optimizer determines if the indexed view can be used. The query must request a subset of the data contained in the view. The ability of the optimizer to use the indexed view even if the view is not directly referenced is available only in SQL Server 2000 Enterprise Edition. In Standard Edition, you can create indexed views, and you can select directly from them, but the optimizer will not choose to use them if they are not directly referenced.

Examples of Indexed Views
The best candidates for improvement by indexed views are queries performing aggregations and joins. We will explain how useful indexed views may be created for these two major groups of queries. The considerations are also valid for queries and indexed views using both joins and aggregations.

-- Example:
USE Northwind
-- Identify 5 products with overall biggest discount total.
-- This may be expressed for example by two different queries:
-- Q1.
select TOP 5 ProductID, SUM(UnitPrice*Quantity)- SUM(UnitPrice*Quantity*(1.00-Discount)) Rebate
from [order details]
group by ProductID
order by Rebate desc
-- Q2.
select TOP 5 ProductID, SUM(UnitPrice*Quantity*Discount) Rebate
from [order details]
group by ProductID
order by Rebate desc
-- The following indexed view will be used to execute Q1.
create view Vdiscount1 with schemabinding as
select SUM(UnitPrice*Quantity) SumPrice, SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice, COUNT_BIG(*) Count, ProductID
from dbo.[order details]
group By ProductID
create unique clustered index VDiscountInd on Vdiscount1 (ProductID)

However, it will not be used by Q2 because the indexed view does not contain the SUM(UnitPrice*Quantity*Discount) aggregate. We can construct another indexed view:

create view Vdiscount2 with schemabinding as
select SUM(UnitPrice*Quantity) SumPrice, SUM(UnitPrice*Quantity*(1.00-Discount)) SumDiscountPrice, SUM(UnitPrice*Quantity*Discount) SumDiscoutPrice2, COUNT_BIG(*) Count, ProductID
from dbo.[order details]
group By ProductID
create unique clustered index VDiscountInd on Vdiscount2 (ProductID)

This view may be used by both Q1 and Q2. Observe that the indexed view Vdiscount2 will have the same number of rows and only one more column compared to Vdiscount1, and it may be used by more queries. In general, try to design indexed views that may be used by more queries.

The following query asks for the order with the largest total discount:
-- Q3.
select TOP 3 OrderID, SUM(UnitPrice*Quantity*Discount) OrderRebate
from dbo.[order details]
group By OrderID
Q3 can use neither of the Vdiscount views because the column OrderID is not included in the view definition. To address this variation of the discount analysis query we may create a different indexed view, similar to the query itself. An attempt to generalize the previous indexed view Vdiscount2 so that all three queries Q1, Q2, and Q3 can take advantage of a single indexed view would require a view with both OrderID and ProductID as grouping columns. Because the OrderID, ProductID combination is unique in the original order details table, the resulting view would have as many rows as the original table, and we would see no savings in using such a view compared to using the original table. Consider the size of the resulting indexed view.
In the case of pure aggregation, the indexed view may provide no significant performance gains if its size is close to the size of the original table. Complex aggregates (STDEV, VARIANCE, AVG) cannot participate in the index view definition. However, SQL Server may use an indexed view to execute a query containing AVG aggregate. Query containing STDEV or VARIANCE cannot use indexed view to pre-compute these values. The next example shows a query producing the average price for a particular product -- Q4. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID group by ProductName, od.ProductID This is an example of indexed view that will be considered by the SQL Server to answer the Q4 create view v3 with schemabinding as select od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) Price, COUNT_BIG(*) Count, SUM(od.Quantity) Units from dbo.[order details] od group by od.ProductID go create UNIQUE CLUSTERED index iv3 on v3 (ProductID) go Observe that the view definition does not contain the table Products. The indexed view does not need to contain all tables used in the query that uses the indexed view. In addition, the following query (same as above Q4 only with one additional search condition) will use the same indexed view. Observe that the added predicate references only columns from tables not present in the v3 view definition. -- Q5. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and p.ProductName like '%tofu%' group by ProductName, od.ProductID The following query cannot use the indexed view because the added search condition od.UnitPrice>10 contains a column from the table in the view definition and the column is neither grouping column nor the predicate appears in the view definition. -- Q6. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and od.UnitPrice>10 group by ProductName, od.ProductID To contrast the Q6 case, the following query will use the indexed view v3 since the added predicate is on the grouping column of the view v3. -- Q7. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from [order details] od, Products p where od.ProductID=p.ProductID and od.ProductID in (1,2,13,41) group by ProductName, od.ProductID -- The previous query Q6 will use the following indexed view V4: create view V4 with schemabinding as select ProductName, od.ProductID, SUM(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units, COUNT_BIG(*) Count from dbo.[order details] od, dbo.Products p where od.ProductID=p.ProductID and od.UnitPrice>10 group by ProductName, od.ProductID create unique clustered index VDiscountInd on V4 (ProductName, ProductID) The same index on the view V4 will be used also for a query where a join to the table Orders is added, for example -- Q8. select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice, SUM(od.Quantity) Units from dbo.[order details] od, dbo.Products p, dbo.Orders o where od.ProductID=p.ProductID and o.OrderID=od.OrderID and od.UnitPrice>10 group by ProductName, od.ProductID We will show several modifications of the query Q8 and explain why such modifications cannot use the above view V4. -- Q8a. 
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice,
 SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID=p.ProductID and o.OrderID=od.OrderID
 and od.UnitPrice>25
group by ProductName, od.ProductID

Q8a cannot use the indexed view because of the WHERE clause mismatch (od.UnitPrice>25 instead of od.UnitPrice>10). Observe that the table Orders does not participate in the indexed view V4 definition. In spite of that, adding a predicate on this table will disallow using the indexed view, because the added predicate may eliminate additional rows participating in the aggregates, as shown in Q8b:

-- Q8b.
select ProductName, od.ProductID, AVG(od.UnitPrice*(1.00-Discount)) AvgPrice,
 SUM(od.Quantity) Units
from dbo.[order details] od, dbo.Products p, dbo.Orders o
where od.ProductID=p.ProductID and o.OrderID=od.OrderID
 and od.UnitPrice>10
 and o.OrderDate>'01/01/1998'
group by ProductName, od.ProductID

Locking and Indexes

In General, You Should Let SQL Server Control the Locking within Indexes
The stored procedure sp_indexoption lets you manually control the unit of locking within an index. It also lets you disallow page locks or row locks within an index. Because these options are available only for indexes, there is no way to control the locking within the data pages of a heap. (But remember that if a table has a clustered index, the data pages are part of the index and are affected by the sp_indexoption setting.) The index options are set for each table or index individually. Two options, AllowRowLocks and AllowPageLocks, are both set to TRUE initially for every table and index. If both of these options are set to FALSE for a table, only full table locks are allowed.

As described in Module 4, SQL Server determines at runtime whether to initially lock rows, pages, or the entire table. The locking of rows (or keys) is heavily favored. The type of locking chosen is based on the number of rows and pages to be scanned, the number of rows on a page, the isolation level in effect, the update activity going on, the number of users on the system needing memory for their own purposes, and so on. SAP databases frequently use sp_indexoption to reduce deadlocks.

Setting vs. Querying
In SQL Server 2000, the procedure sp_indexoption should only be used for setting an index option. To query an option, use the INDEXPROPERTY function.

Lesson 2: Concepts – Statistics

Statistics are the most important tool that the SQL Server query optimizer has for determining the ideal execution plan for a query. Statistics that are out of date or nonexistent seriously jeopardize query performance. SQL Server 2000 computes and stores statistics in a completely different format than all earlier versions of SQL Server. One of the improvements is an increased ability to determine which values are out of the normal range in terms of the number of occurrences. The new statistics maintenance routines are particularly good at determining when a key value has a very unusual skew of data.

What You Will Learn
After completing this lesson, you will be able to:
 Define terms related to statistics collected by SQL Server.
 Describe how statistics are maintained by SQL Server.
 Discuss the autostats feature of SQL Server.
 Describe how statistics are used in query optimization.

Recommended Reading
 Statistics Used by the Query Optimizer in Microsoft SQL Server 2000 http://msdn.microsoft.com/library/techart/statquery.htm

Definitions

Cardinality
The cardinality means how many unique values exist in the data.
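For intuition, cardinality and a rough density figure can be computed by hand. A minimal sketch of ours, assuming the Northwind sample database used earlier in this material:

-- Cardinality: the number of distinct values in a column.
-- For a roughly uniform distribution, density is about 1/cardinality.
select COUNT(DISTINCT ProductID) as Cardinality,
 1.0 / COUNT(DISTINCT ProductID) as ApproxDensity
from [order details]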
Density
For each index and set of column statistics, SQL Server keeps track of details about the uniqueness (or density) of the data values encountered, which provides a measure of how selective the index is. A unique index, of course, has the lowest density: by definition, each index entry can point to only one row. A unique index has a density value of 1/(number of rows in the table). Density values range from 0 through 1. Highly selective indexes have density values of 0.10 or lower. For example, a unique index on a table with 8345 rows has a density of 0.00012 (1/8345). If a nonunique nonclustered index has a density of 0.2165 on the same table, each index key can be expected to point to about 1807 rows (0.2165 × 8345). This is probably not selective enough to be more efficient than just scanning the table, so this index is probably not useful. Because driving the query from a nonclustered index means that the pages must be retrieved in index order, an estimated 1807 data page accesses (or logical reads) are needed if there is no clustered index on the table and the leaf level of the index contains the actual RID of the desired data row. The only time a data page does not need to be reaccessed is when the occasional coincidence occurs in which two adjacent index entries happen to point to the same data page.

In general, you can think of density as the average number of duplicates. We can also talk about the term "join density", which applies to the average number of duplicates in the foreign key column. This would answer the question: in this one-to-many relationship, how many is "many"?

Selectivity
In general, selectivity applies to a particular data value referenced in a WHERE clause. High selectivity means that only a small percentage of the rows satisfy the WHERE clause filter; low selectivity means that many rows will satisfy the filter. For example, in an employees table, the column employee_id is probably very selective, and the column gender is probably not very selective at all.

Statistics
Statistics are a histogram consisting of an even sampling of values for a column or for an index key (or the first column of the key for a composite index) based on the current data. The histogram is stored in the statblob field of the sysindexes table, which is of type image. (Remember that image data is actually stored in structures separate from the data row itself; the data row merely contains a pointer to the image data. For simplicity's sake, we will talk about the index statistics as being stored in the image field called statblob.) To fully estimate the usefulness of an index, the optimizer also needs to know the number of pages in the table or index; this information is stored in the dpages column of sysindexes.

During the second phase of query optimization, index selection, the query optimizer determines whether an index exists for a column in your WHERE clause, assesses the index's usefulness by determining the selectivity of the clause (that is, how many rows will be returned), and estimates the cost of finding the qualifying rows. Statistics for a single-column index consist of one histogram and one density value. The multicolumn statistics for one set of columns in a composite index consist of one histogram for the first column in the index and density values for each prefix combination of columns (including the first column alone). The fact that density information is kept for all columns helps the optimizer decide how useful the index is for joins.
Suppose, for example, that an index is composed of three key fields. The density on the first column might be 0.50, which is not too useful. However, as you look at more key columns in the index, the number of rows pointed to is fewer than (or in the worst case, the same as) the number pointed to by the first column alone, so the density value goes down. If you are looking at both the first and second columns, the density might be 0.25, which is somewhat better. And if you examine all three columns, the density might be 0.03, which is highly selective. It does not make sense to refer to the density of only the second column; the lead column density is always needed.

Statistics Maintenance

Statistics Information Tracks the Distribution of Key Values
SQL Server statistics is basically a histogram that contains up to 200 values of a given key column. In addition to the histogram, the statblob field contains the following information:
 The time of the last statistics collection
 The number of rows used to produce the histogram and density information
 The average key length
 Densities for other combinations of columns
In the statblob column, up to 200 sample values are stored; the range of key values between each sample value is called a step. The sample value is the endpoint of the range. Three values are stored along with each step: a value called EQ_ROWS, which is the number of rows that have a value equal to that sample value; a value called RANGE_ROWS, which specifies how many other values are inside the range (between two adjacent sample values); and the number of distinct values, or RANGE_DENSITY, of the range.

DBCC SHOW_STATISTICS
The DBCC SHOW_STATISTICS output shows us the first two of these three values, but not the range density. The RANGE_DENSITY is instead used to compute two additional values:
 DISTINCT_RANGE_ROWS: the number of distinct rows inside this range (not counting the RANGE_HI_KEY value itself), computed as 1/RANGE_DENSITY.
 AVG_RANGE_ROWS: the average number of rows per distinct value, computed as RANGE_DENSITY * RANGE_ROWS.
In addition to statistics on indexes, SQL Server can also keep track of statistics on columns with no indexes. Knowing the density, or the likelihood of a particular value occurring, can help the optimizer determine an optimum processing strategy, even if SQL Server cannot use an index to actually locate the values.

Statistics on Columns
Column statistics can be useful for two main purposes:
 When the SQL Server optimizer is determining the optimal join order, it is frequently best to have the smaller input processed first. By "input" we mean the table after all filters in the WHERE clause have been applied. Even if there is no useful index on a column in the WHERE clause, statistics could tell us that only a few rows will qualify, and thus that the resulting input will be very small.
 The SQL Server query optimizer can use column statistics on non-initial columns in a composite nonclustered index to determine whether scanning the leaf level to obtain the bookmarks will be an efficient processing strategy. For example, in the member table in the credit database, the firstname column is almost unique. Suppose we have a nonclustered index on (lastname, firstname), and we issue this query:
select * from member where firstname = 'MPRO'
In this case, statistics on the firstname column would indicate very few rows satisfying this condition, so the optimizer will choose to scan the nonclustered index, since it is smaller than the clustered index (the table).
The small number of bookmarks will then be followed to retrieve the actual data.

Manually Updating Statistics
You can also manually force statistics to be updated, in one of two ways. You can run the UPDATE STATISTICS command on a table or on one specific index or column statistics, or you can execute the procedure sp_updatestats, which runs UPDATE STATISTICS against all user-defined tables in the current database. You can create statistics on unindexed columns using the CREATE STATISTICS command or by executing sp_createstats, which creates single-column statistics for all eligible columns for all user tables in the current database. This includes all columns except computed columns and columns of the ntext, text, or image datatypes, and columns that already have statistics or are the first column of an index.

Autostats

By Default, SQL Server Will Update Statistics on Any Index or Column as Needed
Every database is created with the database options auto create statistics and auto update statistics set to true, but you can turn either one off. You can also turn off automatic updating of statistics for a specific table in one of two ways:
 UPDATE STATISTICS. In addition to updating the statistics, the option WITH NORECOMPUTE indicates that the statistics should not be automatically recomputed in the future. Running UPDATE STATISTICS again without the WITH NORECOMPUTE option re-enables automatic updates.
 sp_autostats. This procedure sets or unsets a flag for a table to indicate that statistics should or should not be updated automatically. You can also use this procedure with only the table name to find out whether the table is set to automatically have its index statistics updated.
However, setting the database option auto update statistics to FALSE overrides any individual table settings; in other words, no automatic updating of statistics takes place. This is not a recommended practice unless thorough testing has shown you that you do not need the automatic updates, or that the performance overhead is more than you can afford.

Trace Flags
Trace flag 205 – reports recompiles due to autostats.
Trace flag 8721 – writes information to the errorlog when autostats has been run.
For more information, see the following Knowledge Base article: Q195565 "INF: How SQL Server 7.0 Autostats Work."

Statistics and Performance

The Performance Penalty of NOT Having Up-To-Date Statistics Far Outweighs the Benefit of Avoiding Automatic Updating
Autostats should be turned off only after thorough testing shows it to be necessary. Because autostats only forces a recompile after a certain number or percentage of rows has been changed, you do not have to make any adjustments for a read-only database.

Lesson 3: Concepts – Query Optimization

What You Will Learn
After completing this lesson, you will be able to:
 Describe the phases of query optimization.
 Discuss how SQL Server estimates the selectivity of indexes and columns, and how this estimate is used in query optimization.

Recommended Reading
 Chapter 15, "The Query Processor", Inside SQL Server 2000 by Kalen Delaney
 Chapter 16, "Query Tuning", Inside SQL Server 2000 by Kalen Delaney
 Whitepaper about SQL Server Query Processor Architecture by Hal Berenson and Kalen Delaney http://msdn.microsoft.com/library/backgrnd/html/sqlquerproc.htm

Phases of Query Optimization

Query Optimization Involves Several Phases

Trivial Plan Optimization
Optimization itself goes through several steps. The first step is something called trivial plan optimization.
The whole idea of trivial plan optimization is that cost-based optimization is a bit expensive to run. The optimizer can try a great many possible variations in trying to find the cheapest plan. If SQL Server knows that there is only one really viable plan for a query, it can avoid a lot of work. A prime example is a query that consists of an INSERT with a VALUES clause: there is only one possible plan. Another example is a SELECT where all the columns are in a unique covering index, and that index is the only one that is usable; there is no other index that has that set of columns in it. These two examples are cases where SQL Server should just generate the plan and not try to find something better. The trivial plan optimizer finds the really obvious plans, which are typically very inexpensive. In fact, all the plans that get through the autoparameterization template result in plans that the trivial plan optimizer can find. Between those two mechanisms, the plans that are simple tend to be weeded out early in the process and do not pay a lot of the compilation cost. This is a good thing, because the number of potential plans went up astronomically in 7.0 as SQL Server added hash joins, merge joins, and index intersections to its list of processing techniques.

Simplification and Statistics Loading
If a plan is not found by the trivial plan optimizer, SQL Server can perform some simplifications, usually thought of as syntactic transformations of the query itself, looking for commutative properties and operations that can be rearranged. SQL Server can do constant folding and other operations that do not require looking at the cost or analyzing what the indexes are, but that can result in a more efficient query. SQL Server then loads the metadata, including the statistics information on the indexes, and the optimizer goes through a series of phases of cost-based optimization.

Cost-Based Optimization Phases
The cost-based optimizer is designed as a set of transformation rules that try various permutations of indexes and join strategies. Because of the number of potential plans in SQL Server 7.0 and SQL Server 2000, if the optimizer just ran through all the combinations and produced a plan, the optimization process would take a very long time to run. Therefore, optimization is broken up into phases. Each phase is a set of rules. After each phase is run, the cost of any resulting plan is examined, and if SQL Server determines that the plan is cheap enough, that plan is kept and executed. If the plan is not cheap enough, the optimizer runs the next phase, which is another set of rules. In the vast majority of cases, a good plan will be found in the preliminary phases. Typically, if the plan that a query would have had in SQL Server 6.5 is also the optimal plan in SQL Server 7.0 and SQL Server 2000, the plan will tend to be found either by the trivial plan optimizer or by the first phase of the cost-based optimizer. The rules were intentionally organized to try to make that be true. The plan will probably consist of using a single index and using nested loops. However, every once in a while, because of a lack of statistical information or some other nuance, the optimizer will have to proceed with the later phases of optimization. Sometimes this is because there is a real possibility that the optimizer could find a better plan. When a plan is found, it becomes the optimizer's output, and then SQL Server goes through all the caching mechanisms that we have already discussed in Module 5.
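To watch the optimizer's choice for such obvious cases, you can look at the showplan output. A minimal sketch, using a hypothetical scratch table T of our own:

-- The table T(id, val) is hypothetical.
create table T (id int primary key, val int)
go
SET SHOWPLAN_TEXT ON
go
-- An INSERT ... VALUES has only one viable plan, a trivial-plan candidate.
insert into T (id, val) values (1, 10)
go
-- A SELECT fully covered by the unique index is another candidate.
select id from T where id = 1
go
SET SHOWPLAN_TEXT OFF
go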
Full Optimization
At some point, the optimizer determines that it has gone through enough preliminary phases, and it reverts to a phase called full optimization. If the optimizer goes through all the preliminary phases and still has not found a cheap plan, it examines the cost of the plan that it has so far. If the cost is above the threshold, the optimizer goes into the full optimization phase. This threshold is configurable, as the configuration option 'cost threshold for parallelism'. The full optimization phase assumes that this plan should be run in parallel. If the machine is very busy, the plan will end up running in serial, but the optimizer's goal is to produce a good parallel plan. If the cost is below the threshold (or on a single-processor machine), the full optimization phase simply uses a brute-force method to find a serial plan.

Selectivity Estimation

Selectivity Is One of the Most Important Pieces of Information
One of the most important things the optimizer needs to know is the number of rows from any table that will meet all the conditions in the query. If there are no restrictions on a table and all the rows will be needed, the optimizer can determine the number of rows from the sysindexes table. This number is not absolutely guaranteed to be accurate, but it is the number the optimizer uses. If there is a filter on the table in a WHERE clause, the optimizer needs statistics information. Indexes automatically maintain statistics, and the optimizer will use these values to determine the usefulness of the index. If there is no index on the column involved in the filter, then column statistics can be used or generated.

Optimizing Search Arguments

In General, the Filters in the WHERE Clause Determine Which Indexes Will Be Useful
If an indexed column is referenced in a search argument (SARG), the optimizer will analyze the cost of using that index. A SARG has the form:
 column <operator> value
 value <operator> column
 where the operator is one of =, >, >=, <, <=
The value can be a constant, an operation, or a variable. Some functions will also be treated as SARGs. These queries have SARGs, and a nonclustered index on firstname will be used in most cases:

select * from member where firstname < 'AKKG'
select * from member where firstname = substring('HAAKGALSFJA', 2, 5)
select * from member where firstname = 'AA' + 'KG'
declare @name char(4)
set @name = 'AKKG'
select * from member where firstname < @name

Not all functions can be used in SARGs:

select * from charge where charge_amt < 2*2
select * from charge where charge_amt < sqrt(16)

Compare these queries to ones using = instead of <. With =, the optimizer can use the density information to come up with a good row estimate, even if it is not going to actually perform the function's calculations.

A filter with a variable is usually a SARG. The issue is: can the optimizer come up with useful costing information?
A filter with a variable is not a SARG if the variable is of a different datatype and the column must be converted to the variable's datatype. For more information, see Knowledge Base article Q198625.

use credit
go
CREATE TABLE [member2] (
 [member_no] [smallint] NOT NULL ,
 [lastname] [shortstring] NOT NULL ,
 [firstname] [shortstring] NOT NULL ,
 [middleinitial] [letter] NULL ,
 [street] [shortstring] NOT NULL ,
 [city] [shortstring] NOT NULL ,
 [state_prov] [statecode] NOT NULL ,
 [country] [countrycode] NOT NULL ,
 [mail_code] [mailcode] NOT NULL
)
GO
insert into member2
select member_no, lastname, firstname, middleinitial, street, city, state_prov, country, mail_code
from member
alter table member2 add constraint pk_member2 primary key clustered (lastname, member_no, firstname, country)

declare @id int
set @id = 47
update member2
set city = city + ' City', state_prov = state_prov + ' State'
where lastname = 'Barr'
 and member_no = @id
 and firstname = 'URQYJBFVRRPWKVW'
 and country = 'USA'

(Note that member_no is a smallint column while @id is declared as int, so the column must be converted to the variable's datatype.)

The following query does not have a SARG, and a table scan will be done:

select * from member where substring(lastname, 1, 2) = 'BA'

Some non-SARGs can be converted:

select * from member where lastname like 'ba%'

In some cases, you can rewrite your query to turn a non-SARG into a SARG; for example, you can rewrite the substring query above as the LIKE query that follows it.

Join Order and Types of Joins

Join Order and Strategy Is Determined by the Optimizer
The execution plan output displays the join order from top to bottom; that is, the table listed on top is the first one accessed in the join. You can override the optimizer's join order decision in two ways:
 OPTION (FORCE ORDER) applies to one query.
 SET FORCEPLAN ON applies to the entire session, until set OFF.
If either of these options is used, the join order is determined by the order in which the tables are listed in the query's FROM clause, and no optimization of the join order is done. Forcing the join order may also force a particular join strategy. For example, in most outer join operations, the outer table is processed first and a nested loops join is done. However, if you force the inner table to be accessed first, a merge join will need to be done. Compare the query plan for this query with and without the FORCE ORDER hint:

select * from titles right join publishers on titles.pub_id = publishers.pub_id -- OPTION (FORCE ORDER)

Nested Loop Joins
A nested iteration is when the query optimizer constructs a set of nested loops, and the result set grows as it progresses through the rows. The query optimizer performs the following steps:
1. Finds a row from the first table.
2. Uses that row to scan the next table.
3. Uses the result of the previous table to scan the next table.

Evaluating Join Combinations
The query optimizer automatically evaluates at least four or more possible join combinations, even if those combinations are not specified in the join predicate. You do not have to add redundant clauses. The query optimizer balances the cost and uses statistics to determine the number of join combinations that it evaluates; evaluating every possible join combination is inefficient and costly.

Evaluating Cost of Query Performance
When the query optimizer performs a nested join, you should be aware that certain costs are incurred. Nested loop joins are far superior to both merge joins and hash joins when executing small transactions, such as those affecting only a small set of rows.
The query optimizer:
 Uses nested loop joins if the outer input is quite small and the inner input is indexed and quite large.
 Uses the smaller input as the outer table.
 Requires that a useful index exist on the join predicate for the inner table.
 Always uses a nested loop join strategy if the join operation uses an operator other than an equality operator.

Merge Joins
The columns of the join conditions are used as inputs to process a merge join. SQL Server performs the following steps when using a merge join strategy:
1. Gets the first input values from each input set.
2. Compares the input values.
3. Performs the merge algorithm.
 • If the input values are equal, the rows are returned.
 • If the input values are not equal, the lower value is discarded, and the next input value from that input is used for the next comparison.
4. Repeats the process until all of the rows from one of the input sets have been processed.
5. Evaluates any remaining search conditions in the query and returns only rows that qualify.

Note Only one pass per input is done. The merge join operation ends after all of the input values of one input have been evaluated. The remaining values from the other input are not processed.

Requires That Joined Columns Are Sorted
If you execute a query with join operations, and the joined columns are in sorted order, the query optimizer processes the query by using a merge join strategy. A merge join is very efficient because the columns are already sorted, and it requires fewer page I/Os.

Evaluates Sorted Values
For the query optimizer to use a merge join, the inputs must be sorted. The query optimizer evaluates sorted values in the following order:
1. Uses an existing index tree (most typical). The query optimizer can use the index tree from a clustered index or a covered nonclustered index.
2. Leverages sort operations that the GROUP BY, ORDER BY, and CUBE clauses use. The sorting operation only has to be performed once.
3. Performs its own sort operation, in which case a SORT operator is displayed when graphically viewing the execution plan. The query optimizer does this very rarely.

Performance Considerations
Consider the following facts about the query optimizer's use of the merge join:
 SQL Server performs a merge join for all types of join operations (except cross join or full join operations), including UNION operations.
 A merge join operation may be a one-to-one, one-to-many, or many-to-many operation. If the merge join is a many-to-many operation, SQL Server uses a temporary table to store the rows. If duplicate values from each input exist, one of the inputs rewinds to the start of the duplicates as each duplicate value from the other input is processed.
 Query performance for a merge join is very fast, but the cost can be high if the query optimizer must perform its own sort operation. If the data volume is large and the desired data can be obtained presorted from existing Balanced-Tree (B-Tree) indexes, merge join is often the fastest join algorithm.
 A merge join is typically used if the two join inputs have a large amount of data and are sorted on their join columns (for example, if the join inputs were obtained by scanning sorted indexes).
 Merge join operations can only be performed with an equality operator in the join predicate.

Hash Joins
Hashing is a strategy for dividing data into equal sets of a manageable size based on a given property or characteristic. The grouped data can then be used to determine whether a particular data item matches an existing value.
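Before looking at hash joins in detail, it can be instructive to force each of the three strategies with a join hint and compare the resulting plans. A minimal sketch of ours, assuming the Northwind sample database; the hints are for study only, not a tuning recommendation:

use Northwind
go
-- Same logical query, three forced physical join strategies.
select o.OrderID, c.CompanyName
from Orders o inner loop join Customers c on o.CustomerID = c.CustomerID
select o.OrderID, c.CompanyName
from Orders o inner merge join Customers c on o.CustomerID = c.CustomerID
select o.OrderID, c.CompanyName
from Orders o inner hash join Customers c on o.CustomerID = c.CustomerID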
Note Duplicate data or ranges of data are not useful for hash joins, because the data is not organized together or in order.

When a Hash Join Is Used
The query optimizer uses the hash join option when it estimates that it is more efficient than processing the query by using a nested loop or merge join. It typically uses a hash join when an index does not exist or when existing indexes are not useful.

Assigns a Build and Probe Input
The query optimizer assigns a build and a probe input. If the query optimizer incorrectly assigns the build and probe input (this may occur because of imprecise density estimates), it reverses them dynamically. The ability to change input roles dynamically is called role reversal.

The build input consists of the column values from the table with the lowest number of rows. The build input creates a hash table in memory to store these values. A hash bucket is a storage place in the hash table into which each row of the build input is inserted. Rows from one of the join tables are placed into the hash bucket where the hash key value of the row matches the hash key value of the bucket. Hash buckets are stored as a linked list and only contain the columns that are needed for the query. A hash table contains hash buckets; the hash table is created from the build input.

The probe input consists of the column values from the table with the most rows. The probe input is checked against the build input's hash buckets to find matches.

Note The query optimizer uses column or index statistics to help determine which input is the smaller of the two.

Processing a Hash Join
The following list is a simplified description of how the query optimizer processes a hash join. It is not intended to be comprehensive, because the algorithm is very complex. SQL Server:
1. Reads the probe input. Each probe input row is processed one at a time.
2. Performs the hash algorithm against each probe input row and generates a hash key value.
3. Finds the hash bucket that matches the hash key value.
4. Accesses the hash bucket and looks for the matching row.
5. Returns the row if a match is found.

Performance Considerations
Consider the following facts about the hash joins that the query optimizer uses:
 Similar to merge joins, a hash join is very efficient, because it uses hash buckets, which are like a dynamic index but with less overhead for combining rows.
 Hash joins can be performed for all types of join operations (except cross join operations), including UNION and DIFFERENCE operations.
 A hash operator can remove duplicates and group data, such as SUM(salary) GROUP BY department. For these operations, the query optimizer uses only one input for both the build and probe roles.
 If the join inputs are large and of similar size, the performance of a hash join operation is similar to that of a merge join with prior sorting. However, if the sizes of the join inputs are significantly different, the performance of a hash join is often much faster.
 Hash joins can process large, unsorted, non-indexed inputs efficiently. Hash joins are useful in complex queries because the intermediate results:
 • Are not indexed (unless explicitly saved to disk and then indexed).
 • Are often not sorted for the next operation in the execution plan.
 The query optimizer can identify incorrect estimates and make corrections dynamically to process the query more efficiently.
 A hash join reduces the need for database denormalization.
Denormalization is typically used to achieve better performance by reducing join operations, despite the dangers of redundancy, such as inconsistent updates. Hash joins give you the option to vertically partition your data as part of your physical database design. Vertical partitioning represents groups of columns from a single table in separate files or indexes.

Subquery Performance

Joins Are Not Inherently Better Than Subqueries
Here is an example showing three different ways to update a table, using a second table for lookup purposes. The first uses a JOIN with the update, the second uses a regular subquery introduced with IN, and the third uses a correlated subquery. All three yield nearly identical performance.

Note Performance comparisons cannot be made based on I/Os alone. With HASHING and MERGING techniques, the number of reads may be the same for two queries, yet one may take a lot longer and use more memory resources. Also, always be sure to monitor statistics time.

Suppose you want to add a 5 percent discount to order items in the Order Details table for which the supplier is Exotic Liquids, whose supplierid is 1.

-- JOIN solution
BEGIN TRAN
UPDATE OD
SET discount = discount + 0.05
FROM [Order Details] AS OD
JOIN Products AS P ON OD.productid = P.productid
WHERE supplierid = 1
ROLLBACK TRAN

-- Regular subquery solution
BEGIN TRAN
UPDATE [Order Details]
SET discount = discount + 0.05
WHERE productid IN (SELECT productid FROM Products WHERE supplierid = 1)
ROLLBACK TRAN

-- Correlated subquery solution
BEGIN TRAN
UPDATE [Order Details]
SET discount = discount + 0.05
WHERE EXISTS (SELECT supplierid FROM Products
 WHERE [Order Details].productid = Products.productid AND supplierid = 1)
ROLLBACK TRAN

Internally, Your Join May Be Rewritten
SQL Server's query processor has many different ways of resolving your JOIN expressions. Subqueries may be converted to a JOIN with an implied DISTINCT, which may result in a logical operator of SEMI JOIN. Compare the plans of the first two queries:

USE credit
select member_no from member
where member_no in (select member_no from charge)

select distinct m.member_no
from member m join charge c on m.member_no = c.member_no

The second query uses a HASH MATCH as the final step to remove the duplicates; the first query only had to do a semi join. For these queries, although the I/O values are the same, the first query (with the subquery) runs much faster (almost twice as fast). Another similar looking join is

2009-11-27

Microsoft internal training material - SQL Performance Optimization 3

Contents
Overview
Lesson 1: Concepts – Locks and Lock Manager
Lesson 2: Concepts – Batch and Transaction
Lesson 3: Concepts – Locks and Applications
Lesson 4: Information Collection and Analysis
Lesson 5: Concepts – Formulating and Implementing Resolution

Module 4: Troubleshooting Locking and Blocking

Overview
At the end of this module, you will be able to:
 Discuss how the lock manager uses lock modes, lock resources, and lock compatibility to achieve transaction isolation.
 Describe the various transaction types and how transactions differ from batches.
 Describe how to troubleshoot blocking and locking issues.
 Analyze the output of blocking scripts and Microsoft® SQL Server™ Profiler to troubleshoot locking and blocking issues.
 Formulate hypotheses to resolve locking and blocking issues.

Lesson 1: Concepts – Locks and Lock Manager
This lesson outlines some of the common causes that contribute to the perception of a slow server.

What You Will Learn
After completing this lesson, you will be able to:
 Describe the locking architecture used by SQL Server.
 Identify the various lock modes used by SQL Server.
 Discuss lock compatibility and concurrent access.
 Identify different types of lock resources.
 Discuss dynamic locking and lock escalation.
 Differentiate locks, latches, and other SQL Server internal "locking" mechanisms such as spinlocks and other synchronization objects.

Recommended Reading
 Chapter 14, "Locking", Inside SQL Server 2000 by Kalen Delaney
 SOX000821700049 – SQL 7.0 How to interpret lock resource Ids
 SOX000925700237 – TITLE: Lock escalation in SQL 7.0
 SOX001109700040 – INF: Queries with PREFETCH in the plan hold lock until the end of transaction

Locking Concepts
Delivery Tip Prior to delivering this material, test the class to see if they fully understand the different isolation levels. If the class is not confident in their understanding, review appendix A04_Locking and its accompanying PowerPoint® file.

Transactions in SQL Server provide the ACID properties:

Atomicity
A transaction either commits or aborts. If a transaction commits, all of its effects remain. If it aborts, all of its effects are undone. It is an "all or nothing" operation.

Consistency
An application should maintain the consistency of a database. For example, if you defer constraint checking, it is your responsibility to ensure that the database is consistent.

Isolation
Concurrent transactions are isolated from the updates of other incomplete transactions. These updates do not constitute a consistent state. This property is often called serializability. For example, a second transaction traversing the doubly linked list mentioned above would see the list before or after the insert; it will see only complete changes.

Durability
After a transaction commits, its effects will persist even if there are system failures.

Consistency and isolation are the most important properties in describing SQL Server's locking model. It is up to the application to define what consistency means, and isolation in some form is needed to achieve consistent results. SQL Server uses locking to achieve isolation.

Definition of Dependency
A set of transactions can run concurrently if their outputs are disjoint from the union of one another's input and output sets. For example, if T1 writes some object that is in T2's input or output set, there is a dependency between T1 and T2.

Bad Dependencies
These include lost updates, dirty reads, non-repeatable reads, and phantoms.
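As an illustration of one bad dependency, a dirty read can be reproduced with two connections. A minimal sketch of ours, assuming the pubs sample database (au_id 172-32-1176 is from the shipped sample data):

-- Connection 1: change a row but leave the transaction open.
USE pubs
BEGIN TRAN
UPDATE authors SET au_lname = 'Smith' WHERE au_id = '172-32-1176'

-- Connection 2: read the uncommitted change (a dirty read).
USE pubs
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT au_lname FROM authors WHERE au_id = '172-32-1176'

-- Connection 1: roll back; connection 2 has seen data that never existed.
ROLLBACK TRAN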
ANSI SQL Isolation Levels
An isolation level determines the degree to which data is isolated for use by one process and guarded against interference from other processes. Prior to SQL Server 7.0, the REPEATABLE READ and SERIALIZABLE isolation levels were synonymous; there was no way to prevent non-repeatable reads while not preventing phantoms. By default, SQL Server 2000 operates at an isolation level of READ COMMITTED. To make use of either more or less strict isolation levels in applications, locking can be customized for an entire session by setting the isolation level of the session with the SET TRANSACTION ISOLATION LEVEL statement. To determine the transaction isolation level currently set, use the DBCC USEROPTIONS statement, for example:

USE pubs
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
GO
DBCC USEROPTIONS
GO

Multigranular Locking
In our example, if one transaction (T1) holds an exclusive lock at the table level, and another transaction (T2) holds an exclusive lock at the row level, each of the transactions believes it has exclusive access to the resource. In this scenario, since T1 believes it locks the entire table, it might inadvertently make changes to the same row that T2 thought it had locked exclusively. In a multigranular locking environment, there must be a way to effectively overcome this scenario. The intent lock is the answer to this problem.

Intent Lock
Intent lock is the term used to mean placing a marker in a higher-level lock queue. The type of intent lock can also be called the multigranular lock mode. An intent lock indicates that SQL Server wants to acquire a shared (S) lock or exclusive (X) lock on some of the resources lower down in the hierarchy. For example, a shared intent lock placed at the table level means that a transaction intends to place shared (S) locks on pages or rows within that table. Setting an intent lock at the table level prevents another transaction from subsequently acquiring an exclusive (X) lock on the table containing that page. Intent locks improve performance because SQL Server examines intent locks only at the table level to determine whether a transaction can safely acquire a lock on that table. This removes the requirement to examine every row or page lock on the table to determine whether a transaction can lock the entire table.

Lock Mode
The code shown in the slide represents how the lock mode is stored internally. You can see these codes by querying the master.dbo.spt_values table:

SELECT * FROM master.dbo.spt_values WHERE type = N'L'

However, the req_mode column of master.dbo.syslockinfo has lock mode codes that are one less than the code values shown here. For example, a value of req_mode = 3 represents the Shared lock mode rather than the Schema Modification lock mode.

Lock Compatibility
These locks can apply at any coarser level of granularity. If a row is locked, SQL Server will apply intent locks at both the page and the table level. If a page is locked, SQL Server will apply an intent lock at the table level. SIX locks imply that we have shared access to a resource and have also placed X locks at a lower level in the hierarchy. SQL Server never asks for SIX locks directly; they are always the result of a conversion. For example, suppose a transaction scanned a page using an S lock and then subsequently decided to perform a row-level update. The row would obtain an X lock, but now the page would require an IX lock. The resultant mode on the page would be SIX.
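These intent modes can be observed directly with sp_lock. A minimal sketch of ours, again assuming the pubs sample database:

USE pubs
BEGIN TRAN
-- Update a single row; row-level locking is favored.
UPDATE authors SET contract = 1 WHERE au_id = '172-32-1176'
-- Expect an X lock on the row (KEY), an IX lock on the page (PAG),
-- and an IX lock on the table (TAB).
EXEC sp_lock @@SPID
ROLLBACK TRAN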
Another type of table lock is the schema stability lock (Sch-S), which is compatible with all table locks except the schema modification lock (Sch-M). The schema modification lock (Sch-M) is incompatible with all table locks.

Locking Resources
Delivery Tip Note the differences between Key and Key Range locks. Key Range locks will be covered in a couple of slides.

SQL Server can lock these resources:

Item: Description
DB: A database.
File: A database file.
Index: An entire index of a table.
Table: An entire table, including all data and indexes.
Extent: A contiguous group of data pages or index pages.
Page: An 8-KB data page or index page.
Key: A row lock within an index.
Key-range: A key range. Used to lock ranges between records in a table to prevent phantom insertions or deletions into a set of records. Ensures serializable transactions.
RID: A row identifier. Used to individually lock a single row within a table.
Application: A lock resource defined by an application. The lock manager knows nothing about the resource format. It simply compares the 'strings' representing the lock resources to determine whether it has found a match. If a match is found, it knows that the resource is already locked.

Some of the resources have "sub-resources." The following are sub-resources displayed by the sp_lock output:

Database Lock Sub-Resources:
 Full Database Lock (default)
 [BULK-OP-DB] – Bulk Operation Lock for Database
 [BULK-OP-LOG] – Bulk Operation Lock for Log

Table Lock Sub-Resources:
 Full Table Lock (default)
 [UPD-STATS] – Update Statistics Lock
 [COMPILE] – Compile Lock

Index Lock Sub-Resources:
 Full Index Lock (default)
 [INDEX_ID] – Index ID Lock
 [INDEX_NAME] – Index Name Lock
 [BULK_ALLOC] – Bulk Allocation Lock
 [DEFRAG] – Defragmentation Lock

For more information, see also SOX000821700049 "SQL 7.0 How to interpret lock resource Ids".

Lock Resource Block
The resource types have the following resource block formats:

Resource Type (Code): Content
DB (2): Data 1: sub-resource; Data 2: 0; Data 3: 0
File (3): Data 1: File ID; Data 2: 0; Data 3: 0
Index (4): Data 1: Object ID; Data 2: sub-resource; Data 3: Index ID
Table (5): Data 1: Object ID; Data 2: sub-resource; Data 3: 0
Page (6): Data 1: Page Number; Data 3: 0
Key (7): Data 1: Object ID; Data 2: Index ID; Data 3: Hashed Key
Extent (8): Data 1: Extent ID; Data 3: 0
RID (9): Data 1: RID; Data 3: 0
Application (10): Data 1: Application resource name

The rsc_bin column of master..syslockinfo contains the resource block in hexadecimal format. For an example of how to decode a value from this column using the information above, let us assume we have the following value:

0x000705001F83D775010002014F0BEC4E

With byte swapping within each field, this can be decoded as:
 Byte 0: Flag – 0x00
 Byte 1: Resource Type – 0x07 (Key)
 Bytes 2-3: DBID – 0x0005
 Bytes 4-7: ObjectID – 0x75D7831F (1977058079)
 Bytes 8-9: IndexID – 0x0001
 Bytes 10-15: Hash Key value – 0x02014F0BEC4E

For more information about how to decode this value, see also Inside SQL Server 2000, pages 803 and 806.

Key Range Locking
To support SERIALIZABLE transaction semantics, SQL Server needs to lock sets of rows specified by a predicate, such as

WHERE salary BETWEEN 30000 AND 50000

SQL Server needs to lock data that does not exist! If no rows satisfy the WHERE condition the first time the range is scanned, no rows should be returned on any subsequent scans. Key range locks are similar to row locks on index keys (whether clustered or not). The locks are placed on individual keys rather than at the node level.
The hash value consists of all the key components and the locator. So, for a nonclustered index over a heap where columns c1 and c2 are indexed, the hash would contain contributions from c1, c2, and the RID. A key range lock applied to a particular key means that all keys between the value locked and the next value would be locked for all data modification. Key range locks can lock a slightly larger range than that implied by the WHERE clause. Suppose the following select was executed in a transaction with isolation level SERIALIZABLE:

SELECT * FROM members WHERE first_name BETWEEN 'Al' AND 'Carl'

If 'Al', 'Bob', and 'Dave' are index keys in the table, the first two of these would acquire key range locks. Although this would prevent anyone from inserting either 'Alex' or 'Ben', it would also prevent someone from inserting 'Dan', which is not within the range of the WHERE clause. Prior to SQL Server 7.0, page locking was used to prevent phantoms by locking the entire set of pages on which the phantom would exist. This can be too conservative. Key range locking lets SQL Server lock only a much more restrictive area of the table.

Impact
Key-range locking ensures that these scenarios are SERIALIZABLE:
 Range scan query
 Singleton fetch of nonexistent row
 Delete operation
 Insert operation
However, the following conditions must be satisfied before key-range locking can occur:
 The transaction isolation level must be set to SERIALIZABLE.
 The operation performed on the data must use an index range access. Range locking is activated only when query processing (such as the optimizer) chooses an index path to access the data.

Key Range Lock Modes
Again, the req_mode column of master.dbo.syslockinfo has lock mode codes that are one less than the code values shown here.

Dynamic Locking
When modifying individual rows, SQL Server typically takes row locks to maximize concurrency (for example, in an OLTP order-entry application). When scanning larger volumes of data, it is more appropriate to take page or table locks to minimize the cost of acquiring locks (for example, in DSS, data warehouse, or reporting workloads).

Locking Decision
The decision about which unit to lock is made dynamically, taking many factors into account, including other activity on the system. For example, if there are multiple transactions currently accessing a table, SQL Server will tend to favor row locking more so than it otherwise would. It may mean the difference between scanning the table now and paying a bit more in locking cost, or having to wait to acquire a more coarse lock. A preliminary locking decision is made during query optimization, but that decision can be adjusted when the query is actually executed.

Lock Escalation
When the lock count for the transaction exceeds and is a multiple of ESCALATION_THRESHOLD (1250), the lock manager attempts to escalate. For example, when a transaction has acquired 1250 locks, the lock manager will try to escalate. The number of locks held may continue to increase after the escalation attempt (for example, because new tables are accessed, or the previous lock escalation attempts failed due to incompatible locks held by another spid). If the lock count for this transaction reaches 2500 (1250 * 2), the lock manager will attempt escalation again. The lock manager looks at the lock memory it is using, and if it is more than 40 percent of SQL Server's allocated buffer pool memory, it tries to find a scan (SDES) where no escalation has already been performed.
It then repeats the search operation until all scans have been escalated or until the memory used drops under the MEMORY_LOAD_ESCALATION_THRESHOLD (40%) value. If lock escalation is not possible or fails to significantly reduce the lock memory footprint, SQL Server can continue to acquire locks until the total lock memory reaches 60 percent of the buffer pool (MAX_LOCK_RESOURCE_MEMORY_PERCENTAGE = 60). Lock escalation may also be done when a single scan (SDES) holds more than LOCK_ESCALATION_THRESHOLD (765) locks. There is no lock escalation on temporary tables or system tables.

Trace Flag 1211 disables lock escalation. Important Do not relay this to the customer without careful consideration. Lock escalation is a necessary feature, not something to be avoided completely. Trace flags are global, and disabling lock escalation could lead to out-of-memory situations, extremely poorly performing queries, or other problems. Lock escalation tracing can be seen using the Profiler or with the general locking trace flag, -T1200. However, Trace Flag 1200 shows all lock activity, so it should not be used on a production system. For more information, see also SOX000925700237 "TITLE: SQL 7.0 Lock escalation in SQL 7.0".

Lock Timeout

Application Lock Timeout
An application can set a lock timeout for a session with the SET option:

SET LOCK_TIMEOUT N

where N is a number of milliseconds. A value of -1 means that there will be no timeout, which is equivalent to the version 6.5 behavior. A value of 0 means that there will be no waiting; if a process finds a resource locked, it will generate error message 1222 and continue with the next statement. The current value of LOCK_TIMEOUT is stored in the global variable @@lock_timeout.

Note After a lock timeout, any transaction containing the statement is rolled back or canceled by SQL Server 2000 (bug #352640 was filed). This behavior is different from that of SQL Server 7.0. With SQL Server 7.0, the application must have an error handler that can trap error 1222; if an application does not trap the error, it can proceed unaware that an individual statement within a transaction has been canceled, and errors can occur because statements later in the transaction may depend on the statement that was never executed. Bug #352640 is fixed in hotfix build 8.00.266, whereby a lock timeout will only cancel the individual statement rather than the containing transaction.

Internal Lock Timeout
At times, internal operations within SQL Server will attempt to acquire locks via the lock manager. Typically, these lock requests are issued with "no waiting." For example, the ghost record processing might try to clean up rows on a particular page, and before it can do that, it needs to lock the page. Thus, the ghost record manager will request a page lock with no wait, so that if it cannot lock the page, it will just move on to other pages; it can always come back to this page later. If you look at SQL Profiler Lock:Timeout events, internal lock timeouts typically have a duration value of zero.

Lock Duration

Lock Mode and Transaction Isolation Level
For the REPEATABLE READ transaction isolation level, update locks are held until data is read and processed, unless promoted to exclusive locks. "Data is processed" means that we have decided whether the row in question matched the search criteria; if not, then the update lock is released; otherwise, we get an exclusive lock and make the modification.
Consider the following query:

use northwind
go
dbcc traceon(3604, 1200, 1211) -- turn on lock tracing
                               -- and disable escalation
go
set transaction isolation level repeatable read
begin tran
update dbo.[order details]
set discount = convert(real, discount)
where discount = 0.0
exec sp_lock

Update locks are promoted to exclusive locks when there is a match; otherwise, the update lock is released. The sp_lock output verifies that the SPID does not hold any update locks or shared locks at the end of the query. Lock escalation is turned off so that an exclusive table lock is not held at the end.

Warning Do not use trace flag 1200 in a production environment, because it produces a lot of output and slows down the server. Trace flag 1211 should not be used unless you have done extensive study to make sure it helps with performance. These trace flags are used here for illustration and learning purposes only.

Lock Ownership
Most of the locking discussion in this lesson relates to locks owned by "transactions." In addition to transactions, cursors and sessions can be owners of locks, and both affect how long locks are held. When the SCROLL_LOCKS option is used, a cursor lock is held on every fetched row, regardless of the state of a transaction, until the next row is fetched or the cursor is closed. Locks owned by a session are outside the scope of a transaction. The duration of these locks is bounded by the connection, and the process will continue to hold them until it disconnects. A typical lock owned by a session is the database (DB) lock.

Locking – Read Committed Scan
Under the read committed isolation level, when database pages are scanned, shared locks are held while each page is read and processed. The shared locks are released "behind" the scan and allow other transactions to update rows. It is important to note that the currently acquired shared lock will not be released until the shared lock for the next page is successfully acquired (this is commonly known as "crabbing"). If the same pages are scanned again, rows may have been modified or deleted by other transactions.

Locking – Repeatable Read Scan
Under the repeatable read isolation level, when database pages are scanned, shared locks are held while each page is read and processed. SQL Server continues to hold these shared locks, thus preventing other transactions from updating the rows. If the same pages are scanned again, previously scanned rows will not have changed, but new rows may have been added by other transactions.

Locking – Serializable Read Scan
Under the serializable read isolation level, when database pages are scanned, shared locks are held not only on rows but also on the scanned key ranges. SQL Server continues to hold these shared locks until the end of the transaction. Because key range locks are held, not only are other transactions prevented from modifying the rows, but no new rows can be inserted.

Prefetch and Isolation Level

Prefetch and Locking Behavior
The prefetch feature is available for use with SQL Server 7.0 and SQL Server 2000. When searching for data using a nonclustered index, the index is searched for a particular value. When that value is found, the index points to the disk address. The traditional approach would be to immediately issue an I/O for that row, given the disk address. The result is one synchronous I/O per row and, at most, one disk at a time working to evaluate the query. This does not take advantage of striped disk sets. The prefetch feature takes a different approach.
It continues looking for more record pointers in the nonclustered index. When it has collected a number of them, it provides the storage engine with prefetch hints. These hints tell the storage engine that the query processor will need these particular records soon. The storage engine can now issue several I/Os simultaneously, taking advantage of striped disk sets to execute multiple operations in parallel. For example, if the engine is scanning a nonclustered index to determine which rows qualify but will eventually need to visit the data page as well to access columns that are not in the index, it may decide to submit asynchronous page read requests for a group of qualifying rows. The prefetched data pages are then revisited later to avoid waiting for each individual page read to complete in a serial fashion. This data access path requires that a lock be held between the prefetch request and the row lookup, to stabilize the row on the page so that it is not moved by a page split or clustered key update.

For our example, the isolation level of the query is escalated to REPEATABLE READ, overriding the transaction isolation level. With SQL Server 7.0 and SQL Server 2000, portions of a transaction can execute at a different transaction isolation level than the transaction itself. This is implemented as lock classes. Lock classes are used to control lock lifetime when portions of a transaction need to execute at a stricter isolation level than the underlying transaction. Unfortunately, in SQL Server 7.0 and SQL Server 2000, the lock class is created at the topmost operator of the query and hence released only at the end of the query. Currently there is no support for releasing the lock (lock class) after the row has been discarded or fetched by the filter or join operator. This is because the isolation level can be set at the query level via a lock class, but no lower. Because of this, locks acquired during the query will not be released until the query completes. If prefetch is occurring, you may see a single SPID that holds hundreds of shared KEY or PAG locks even though the connection's isolation level is READ COMMITTED. The isolation level can be determined from DBCC PSS output. For details about this behavior, see "SOX001109700040 INF: Queries with PREFETCH in the plan hold lock until the end of transaction".

Other Locking Mechanisms
The lock manager does not manage latches and spinlocks.

Latches
Latches are internal mechanisms used to protect pages while doing operations such as placing a row physically on a page, compressing space on a page, or retrieving rows from a page. Latches can roughly be divided into I/O latches and non-I/O latches. If you see a high number of non-I/O related latches, SQL Server is usually doing a large number of hash or sort operations in tempdb. You can monitor latch activity via the DBCC SQLPERF('WAITSTATS') command.

Spinlocks
A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multi-processor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that will check to see if the lock is available and, if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a "sleep" (wait) state for a pre-determined amount of time. When it wakes, hopefully, the lock is free and available.
Spinlock
A spinlock is an internal data structure that is used to protect vital information that is shared within SQL Server. On a multiprocessor machine, when SQL Server tries to access a particular resource protected by a spinlock, it must first acquire the spinlock. If it fails, it executes a loop that checks to see if the lock is available and, if not, decrements a counter. If the counter reaches zero, it yields the processor to another thread and goes into a "sleep" (wait) state for a predetermined amount of time. When it wakes, hopefully, the lock is free and available. If not, the loop starts again, and it is terminated only when the lock is acquired. The reason for implementing a spinlock is that it is probably less costly to "spin" for a short time than to yield the processor. Yielding the processor forces an expensive context switch where:
 The old thread's state must be saved
 The new thread's state must be reloaded
 The data stored in the L1 and L2 cache become useless to the processor
On a single-processor computer, the loop is not useful because no other thread can be running and thus, no one can release the spinlock for the currently executing thread to acquire. In this situation, the thread yields the processor immediately.

Lesson 2: Concepts – Batch and Transaction
This lesson outlines some of the common causes that contribute to the perception of a slow server.
What You Will Learn
After completing this lesson, you will be able to:
 Review batch processing and error checking.
 Review explicit, implicit, and autocommit transactions and the transaction nesting level.
 Discuss how commit and rollback transactions done in stored procedures and triggers affect the transaction nesting level.
 Discuss the various transaction isolation levels and their impact on locking.
 Discuss the difference between aborting a statement, a transaction, and a batch.
 Describe how @@error, @@trancount, and @@rowcount can be used for error checking and handling.
Recommended Reading
 Chapter 12 "Transactions and Triggers", Inside SQL Server 2000 by Kalen Delaney

Batch Definition
SQL Profiler Statements and Batches
To help further your understanding of what a batch is and what a statement is, you can use SQL Profiler to study the definition of batch and statement.
 Try This: Using SQL Profiler to Analyze a Batch
1. Log on to a server with Query Analyzer.
2. Start SQL Profiler against the same server.
3. Start a trace using the "StandardSQLProfiler" template.
4. Execute the following using Query Analyzer:
SELECT @@VERSION
SELECT @@SPID
The 'SQL:BatchCompleted' event is captured by the trace. It shows both statements as a single batch.
5. Now execute the following using Query Analyzer:
{call sp_who()}
What shows up? The 'RPC:Completed' event with the sp_who information. RPC is simply another entry point into SQL Server for calling stored procedures with native data types; this allows one to avoid parsing. The 'RPC:Completed' event should be considered the same as a batch for the purposes of this discussion.
6. Stop the current trace and start a new trace using the "SQLProfilerTSQL_SPs" template. Issue the same command as outlined in step 5 above. Looking at the output, you can see not only the batch markers but also each statement as executed within the batch.

Autocommit, Explicit, and Implicit Transactions
Autocommit Transaction Mode (Default)
Autocommit mode is the default transaction management mode of SQL Server. Every Transact-SQL statement, whether it is a standalone statement or part of a batch, is committed or rolled back when it completes. If a statement completes successfully, it is committed; if it encounters any error, it is rolled back. A SQL Server connection operates in autocommit mode whenever this default mode has not been overridden by either explicit or implicit transactions. Autocommit mode is also the default mode for ADO, OLE DB, ODBC, and DB-Library. A SQL Server connection operates in autocommit mode until a BEGIN TRANSACTION statement starts an explicit transaction, or implicit transaction mode is set on.
When the explicit transaction is committed or rolled back, or when implicit transaction mode is turned off, SQL Server returns to autocommit mode.

Explicit Transaction Mode
An explicit transaction is a transaction that starts with a BEGIN TRANSACTION statement. An explicit transaction can contain one or more statements and must be terminated by either a COMMIT TRANSACTION or a ROLLBACK TRANSACTION statement.

Implicit Transaction Mode
SQL Server can automatically or, more precisely, implicitly start a transaction for you if a SET IMPLICIT_TRANSACTIONS ON statement is run or if the implicit transaction option is turned on globally by running sp_configure 'user options' 2. (Actually, the bit mask 0x2 must be turned on for the user option, so you might have to perform an 'OR' operation with the existing user option value.) See SQL Server 2000 Books Online on how to turn on implicit transactions under ODBC and OLE DB (acdata.chm::/ac_8_md_06_2g6r.htm).

Transaction Nesting
Explicit transactions can be nested. Committing inner transactions is ignored by SQL Server other than to decrement @@TRANCOUNT. The transaction is either committed or rolled back based on the action taken at the end of the outermost transaction. If the outer transaction is committed, the inner nested transactions are also committed. If the outer transaction is rolled back, then all inner transactions are also rolled back, regardless of whether the inner transactions were individually committed. Each call to COMMIT TRANSACTION applies to the last executed BEGIN TRANSACTION. If the BEGIN TRANSACTION statements are nested, then a COMMIT statement applies only to the last nested transaction, which is the innermost transaction. Even if a COMMIT TRANSACTION transaction_name statement within a nested transaction refers to the transaction name of the outer transaction, the commit applies only to the innermost transaction. If a ROLLBACK TRANSACTION statement without a transaction_name parameter is executed at any level of a set of nested transactions, it rolls back all the nested transactions, including the outermost transaction. The @@TRANCOUNT function records the current transaction nesting level. Each BEGIN TRANSACTION statement increments @@TRANCOUNT by one. Each COMMIT TRANSACTION statement decrements @@TRANCOUNT by one. A ROLLBACK TRANSACTION statement that does not have a transaction name rolls back all nested transactions and decrements @@TRANCOUNT to 0. A ROLLBACK TRANSACTION that uses the transaction name of the outermost transaction in a set of nested transactions rolls back all the nested transactions and decrements @@TRANCOUNT to 0. When you are unsure whether you are already in a transaction, SELECT @@TRANCOUNT to determine whether it is 1 or more. If @@TRANCOUNT is 0, you are not in a transaction. You can also find the transaction nesting level by checking the sysprocesses.open_tran column. See the SQL Server 2000 Books Online topic "Nesting Transactions" (acdata.chm::/ac_8_md_06_66nq.htm) for more information.
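A minimal sketch of the nesting rules above; the @@TRANCOUNT values shown in the comments are what the text predicts:

begin tran                -- @@TRANCOUNT: 0 -> 1
begin tran                -- @@TRANCOUNT: 1 -> 2
commit tran               -- inner commit only decrements: 2 -> 1; nothing is made permanent yet
select @@trancount        -- returns 1
rollback tran             -- rolls back ALL work and sets @@TRANCOUNT to 0
select @@trancount        -- returns 0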
Statement, Transaction, and Batch Abort
One batch can have many statements, and one transaction can have multiple statements as well. One transaction can span multiple batches, and one batch can have multiple transactions.

Statement Abort
Only the currently executing statement is aborted. This can be a bit confusing when you start talking about statements in a trigger or stored procedure. Let us look closely at the following trigger:

CREATE TRIGGER TRG8134 ON TBL8134 AFTER INSERT
AS
BEGIN
SELECT 1/0
SELECT 'Next command in trigger'
END

To fire the INSERT trigger, the batch could be as simple as 'INSERT INTO TBL8134 VALUES(1)'. However, the trigger contains two statements that must be executed as part of the batch to satisfy the client's insert request. When the 'SELECT 1/0' causes the divide-by-zero error, a statement abort is issued for the 'SELECT 1/0' statement.

Batch and Transaction Abort
On SQL Server 2000 (and SQL Server 7.0), whenever a non-informational error is encountered in a trigger, the statement abort is promoted to a batch and transaction abort. Thus, in the example the statement abort for 'SELECT 1/0' is promoted to an entire batch abort. No further statements in the trigger or batch will be executed, and a rollback is issued. On SQL Server 6.5, the statement aborts immediately and results in a transaction abort; however, the rest of the statements within the trigger are executed. This trigger could return 'Next command in trigger' as a result set, and once the trigger completes, the batch abort promotion takes effect. Conversely, submitting a similar set of statements in a standalone batch can result in different behavior:

SELECT 1/0
SELECT 'Next command in batch'

Not considering the set option possibilities, a divide-by-zero error generally results in a statement abort. Since it is not in a trigger, the promotion to a batch abort is avoided and the subsequent SELECT statement can execute. The programmer should add an "if @@ERROR" check immediately after the 'SELECT 1/0' to control the T-SQL execution flow correctly.

Aborting and Set Options
ARITHABORT
If SET ARITHABORT is ON, these error conditions cause the query or batch to terminate. If the errors occur in a transaction, the transaction is rolled back. If SET ARITHABORT is OFF and one of these errors occurs, a warning message is displayed, and NULL is assigned to the result of the arithmetic operation. When an INSERT, DELETE, or UPDATE statement encounters an arithmetic error (overflow, divide-by-zero, or a domain error) during expression evaluation when SET ARITHABORT is OFF, SQL Server inserts or updates a NULL value. If the target column is not nullable, the insert or update action fails and the user receives an error.
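A hedged sketch of the ARITHABORT difference; note that the exact behavior also depends on ANSI_WARNINGS, which is assumed OFF here so the OFF case returns NULL with a warning as described above:

-- with ARITHABORT (and ANSI_WARNINGS) OFF: warning, NULL result, the batch continues
set ansi_warnings off
set arithabort off
select 1/0            -- returns NULL with a warning
select 'still running'
go
-- with ARITHABORT ON: the statement and batch are aborted
set arithabort on
select 1/0            -- divide-by-zero error
select 'never reached'
go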
XACT_ABORT
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is terminated and rolled back. When OFF, only the Transact-SQL statement that raised the error is rolled back and the transaction continues processing. Compile errors, such as syntax errors, are not affected by SET XACT_ABORT. For example:

CREATE TABLE t1 (a int PRIMARY KEY)
CREATE TABLE t2 (a int REFERENCES t1(a))
GO
INSERT INTO t1 VALUES (1)
INSERT INTO t1 VALUES (3)
INSERT INTO t1 VALUES (4)
INSERT INTO t1 VALUES (6)
GO
SET XACT_ABORT OFF
GO
BEGIN TRAN
INSERT INTO t2 VALUES (1)
INSERT INTO t2 VALUES (2) /* Foreign key error */
INSERT INTO t2 VALUES (3)
COMMIT TRAN
SELECT 'Continue running batch 1...'
GO
SET XACT_ABORT ON
GO
BEGIN TRAN
INSERT INTO t2 VALUES (4)
INSERT INTO t2 VALUES (5) /* Foreign key error */
INSERT INTO t2 VALUES (6)
COMMIT TRAN
SELECT 'Continue running batch 2...'
GO
/* The select shows only keys 1 and 3 added. The key 2 insert failed and was rolled back, but XACT_ABORT was OFF and the rest of the transaction succeeded. The key 5 insert error with XACT_ABORT ON caused all of the second transaction to roll back. Also note that 'Continue running batch 2...' is not returned, indicating that the batch was aborted. */
SELECT * FROM t2
GO
DROP TABLE t2
DROP TABLE t1
GO

Compile and Run-time Errors
Compile Errors
Compile errors are encountered during syntax checks, security checks, and other general operations to prepare the batch for execution. These errors can prevent the optimization of the query and thus lead to an immediate abort. The statement is not run and the batch is aborted. The transaction state is generally left untouched. For example, assume there are four statements in a particular batch. If the third statement has a syntax error, none of the statements in the batch is executed.

Optimization Errors
Optimization errors include rare situations where the statement encounters a problem when attempting to build an optimal execution plan. Example: a "too many tables referenced in the query" error is reported because a "work table" was added to the plan.

Runtime Errors
Runtime errors are those that are encountered during the execution of the query. Consider the following batch:

SELECT * FROM pubs.dbo.titles
UPDATE pubs.dbo.authors SET au_lname = au_lname
SELECT * FROM foo
UPDATE pubs.dbo.authors SET au_lname = au_lname

If you run the above statements in a batch, the first two statements will be executed, the third statement will fail because table foo does not exist, and the batch will terminate. Deferred Name Resolution is the feature that allows this batch to start executing before resolving the object foo. This feature allows SQL Server to delay object resolution and place a "placeholder" in the query's execution. The object referenced by the placeholder is not resolved until the query is executed. In our example, the execution of the statement "SELECT * FROM foo" will trigger another compile process to resolve the name again. This time, error message 208 is returned:

Error: 208, Level 16, State 1, Line 1
Invalid object name 'foo'.

Message 208 can be encountered as a runtime or compile error depending on whether the Deferred Name Resolution feature is available: in SQL Server 6.5 this would be considered a compile error, and on SQL Server 2000 (and SQL Server 7.0) a runtime error due to Deferred Name Resolution. In the following example, if a trigger referenced authors2, the error is detected as SQL Server attempts to execute the trigger. However, under SQL Server 6.5 the create trigger statement fails because authors2 does not exist at compile time. When errors are encountered in a trigger, generally, the statement, batch, and transaction are aborted. You should be able to observe this by running the following script in the pubs database:

create table tblTest(iID int)
go
create trigger trgInsert on tblTest for INSERT
as
begin
select * from authors
select * from authors2
select * from titles
end
go
begin tran
select 'Before'
insert into tblTest values(1)
select 'After'
go
select @@TRANCOUNT
go

When run in a batch, the statement and the batch are aborted but the transaction remains active. The following script illustrates this:

begin tran
select 'Before'
select * from authors2
select 'After'
go
select @@TRANCOUNT
go

One other factor in compile versus runtime errors is implicit data type conversion. If you were to run the following statements on SQL Server 6.5 and SQL Server 2000 (and SQL Server 7.0):

create table tblData(dtData datetime)
go
select 1
insert into tblData values(12/13/99)
go

On SQL Server 6.5, you get an error before execution of the batch begins, so no statements are executed and the batch is aborted.
Error: 206, Level 16, State 2, Line 2
Operand type clash: int is incompatible with datetime

On SQL Server 2000, you get the default value (1900-01-01 00:00:00.000) inserted into the table. SQL Server 2000 implicit data type conversion treats this as integer division. The integer division of 12/13/99 is 0, so the default date and time value is inserted, and no error is returned. To correct the problem on either version, wrap the date string in quotes. See Bug #56118 (sqlbug_70) for more details about this situation.

Another example of a runtime error is a 605 message:

Error: 605
Attempt to fetch logical page %S_PGID in database '%.*ls' belongs to object '%.*ls', not to object '%.*ls'.

A 605 error is always a runtime error. However, depending on the transaction isolation level established by the SPID (e.g., using the NOLOCK lock hint), the handling of the error can vary. Specifically, a 605 error is considered an ACCESS error. Errors associated with buffer and page access are found in the 600 series of errors. When the error is encountered, the isolation level of the SPID is examined to determine proper handling based on the informational or fatal error level.

Transaction Error Checking
Not all errors cause transactions to automatically roll back. Although it is difficult to determine exactly which errors will roll back transactions and which errors will not, the main idea here is that programmers must perform error checking and handle errors appropriately.

Error Handling
Raiserror Details
Raiserror seems to be a source of confusion but is really rather simple. Raiserror with a severity level of 20 or higher will terminate the connection. Of course, when the connection is terminated, a full rollback of any open transaction will immediately be instantiated by SQL Server (except for distributed transactions with DTC involved). Severity levels lower than 20 will simply result in the error message being returned to the client; they do not affect the transaction scope of the connection. Consider the following batch:

use pubs
begin tran
update authors set au_lname = 'smith'
raiserror ('This is bad', 19, 1) with log
select @@trancount

With severity set at 19, the 'select @@trancount' will be executed after the raiserror statement and will return a value of 1. If the severity is changed to 20, then the select statement will not run and the connection is broken.

Important Error handling must occur not only in T-SQL batches and stored procedures, but also in application program code.

Transactions and Triggers (1 of 2)
Basic behavior assumes the implicit transactions setting is OFF. This behavior makes it possible to identify business logic errors in a trigger, raise an error, roll back the action, and add an audit table entry. Logically, the insert into the audit table cannot take place before the ROLLBACK action, and you would not want to build the audit table insert into every application error handler that violated the business rule of the trigger. For more information, see also… SQL Server 2000 Books Online topic "Rollbacks in stored procedures and triggers" (acdata.chm::/ac_8_md_06_4qcz.htm)
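A minimal sketch of the pattern just described, with hypothetical table and column names; note the audit insert deliberately comes after the ROLLBACK, so it survives in its own autocommit transaction:

-- enforce a business rule, roll back the action, and leave an audit trail
create trigger trgProtectRows on tblDocs for delete
as
if exists (select * from deleted where code = 'RO')
begin
    rollback tran  -- undo the delete (and the enclosing transaction)
    insert into tblAudit values (getdate(), 'attempt to delete a read-only row')
    raiserror('Read-only rows cannot be deleted.', 16, 1)
end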
IMPLICIT_TRANSACTIONS ON Behavior
The behavior of firing other triggers on the same table can be tricky. Say you added a trigger that checks the CODE field. Read-only versions of the rows contain the code 'RO' and read/write versions use 'RW'. Whenever someone tries to delete a row with code 'RO', the trigger issues the rollback and logs an audit table entry. However, you also have a second trigger that is responsible for cascading delete operations. One client could issue the delete without implicit transactions on, and only the current trigger would execute and then terminate the batch. However, a second client with implicit transactions on could issue the same delete and the secondary trigger would fire. You end up with a situation in which the cascading delete operations can take place (are committed) but the initial row remains in the table because of the rollback operation. None of the delete operations should be allowed, but because the transaction scope was restarted by the implicit transactions setting, they were.

Transactions and Triggers (2 of 2)
It is extremely difficult to determine the execution state of a trigger when using explicit rollback statements in combination with implicit transactions. The RETURN statement is not allowed to return a value. The only way I have found to set @@ERROR is to use a 'raiserror' as the last execution statement in the last trigger to execute. If you modify the example, the following RAISERROR statement will set @@ERROR to 50000:

CREATE TRIGGER trgTest on tblTest for INSERT
AS
BEGIN
ROLLBACK
INSERT INTO tblAudit VALUES (1)
RAISERROR('This is bad', 14, 1)
END

However, this value does not carry over to a secondary trigger for the same table. If you raise an error at the end of the first trigger and then look at @@ERROR in the secondary trigger, the @@ERROR remains 0.

Carrying Forward an Active/Open Transaction
It is possible to exit from a trigger and carry forward an open transaction by issuing a BEGIN TRAN or by setting implicit transactions on and doing an INSERT, UPDATE, or DELETE.
Warning It is never recommended that a trigger call BEGIN TRANSACTION. By doing this you increment the transaction count. Invalid code logic, such as not calling a commit transaction, can lead to a situation where the transaction count remains elevated upon exit of the trigger.

Transaction Count
The behavior is better explained by understanding how the server works. Whether or not you are in a transaction, when a modification takes place the transaction count is incremented. So, in the simplest form, during the processing of an insert the transaction count is 1. On completion of the insert, the server will commit (and thus decrement the transaction count). If the commit identifies that the transaction count has returned to 0, the actual commit processing is completed. Issuing a commit when the transaction count is greater than 1 simply decrements the nested transaction counter. Thus, when we enter a trigger, the transaction count is 1, and at the completion of the trigger, the transaction count will be 0 due to the commit issued at the end of the modification statement (the insert). In our example, if the connection was already in a transaction and called the second INSERT, since implicit transactions are ON, the transaction count in the trigger will be 2 as long as the ROLLBACK is not executed. At the end of the insert, the commit is again issued to decrement the transaction reference count to 1. However, the value does not return to 0, so the transaction remains open/active. Subsequent triggers are only fired if the transaction count at the end of the trigger remains greater than or equal to 1. The key to the continuation of secondary triggers and the batch is the transaction count at the end of a trigger execution: if the trigger that performs a rollback has done an explicit BEGIN TRANSACTION or uses implicit transactions, subsequent triggers and the batch will continue; if the transaction count is not 1 or greater, subsequent triggers and the batch will not execute.
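A minimal sketch (hypothetical table and trigger names) that lets you watch the transaction count described above from inside a trigger:

-- observe @@TRANCOUNT inside a trigger
create table tblT (i int)
go
create trigger trgT on tblT for insert
as
select @@trancount as trancount_in_trigger -- expect 1 for a standalone INSERT,
                                           -- 2 if the INSERT runs inside an open transaction
go
insert into tblT values (1)                -- fires the trigger; reports 1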
Warning Forcing the transaction count after issuing a rollback is dangerous because you can easily lose track of your transaction nesting level. When performing an explicit rollback in a trigger, you should immediately issue a RETURN statement to maintain consistent behavior between connections with and without the implicit transactions setting. This will force the trigger(s) and batch to terminate immediately. One method of dealing with this issue is to run 'SET IMPLICIT_TRANSACTIONS OFF' as the first statement of any trigger. Other methods entail checking @@TRANCOUNT at the end of the trigger and continuing to COMMIT the transaction as long as @@TRANCOUNT is greater than 1.

Examples
The following examples are based on this table:

create table tbl50000Insert (iID int NOT NULL)
go

Note If more than one trigger is used, the sp_settriggerorder command should be used to guarantee the trigger firing sequence (see the sketch after the second example). This command is omitted in these examples to keep them simple.

First Example
In the first example, the second trigger is never fired and the batch, starting with the insert statement, is aborted. Thus, the print statement is never issued.

print('Trigger issues rollback - cancels batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
select 'Inserted', * from inserted
rollback tran
select 'End of trigger', @@TRANCOUNT as 'TRANCOUNT'
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
select 'In Trigger2'
select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(1)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Second Example
The next example shows that, since a new transaction is started, the second trigger will be fired and the print statement in the batch will be executed. Note that the insert is rolled back.

print('Trigger issues rollback - increases tran count to continue batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
select 'Inserted', * from inserted
rollback tran
begin tran
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
select 'In Trigger2'
select 'Trigger 2 Inserted', * from inserted
end
go
insert into tbl50000Insert values(2)
print('---------------------- In same batch')
select * from tbl50000Insert
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert
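For reference, here is a hedged sketch of the sp_settriggerorder call that the Note above mentions, applied to the two example triggers (only 'First' and 'Last' positions can be assigned):

-- make trg50000Insert fire before trg50000Insert2
exec sp_settriggerorder @triggername = 'trg50000Insert',  @order = 'First', @stmttype = 'INSERT'
exec sp_settriggerorder @triggername = 'trg50000Insert2', @order = 'Last',  @stmttype = 'INSERT'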
Third Example
In the third example, the raiserror statement is used to set the @@ERROR value, and the BEGIN TRAN statement is used in the trigger to allow the batch to continue to run.

print('Trigger issues rollback - uses raiserror to set @@ERROR')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
select 'Inserted', * from inserted
rollback tran
begin tran -- Increase @@trancount to allow batch to continue
select @@trancount as 'Trancount'
raiserror('This is from the trigger', 14, 1)
end
go
insert into tbl50000Insert values(3)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
go
delete from tbl50000Insert

Fourth Example
For the fourth example, a second trigger is added to illustrate the fact that the @@ERROR value set in the first trigger will not be seen in the second trigger, nor will it show up in the batch after the second trigger is fired.

print('Trigger issues rollback - uses raiserror to set @@ERROR, not seen in second trigger and cleared in batch')
go
create trigger trg50000Insert on tbl50000Insert for INSERT
as
begin
select 'Inserted', * from inserted
rollback
begin tran -- Increase @@trancount to allow batch to continue
select @@TRANCOUNT as 'Trancount'
raiserror('This is from the trigger', 14, 1)
end
go
create trigger trg50000Insert2 on tbl50000Insert for INSERT
as
begin
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
end
go
insert into tbl50000Insert values(4)
select @@ERROR as 'ERROR', @@TRANCOUNT as 'Trancount'
go
-- Cleanup
drop trigger trg50000Insert
drop trigger trg50000Insert2
go
delete from tbl50000Insert

Lesson 3: Concepts – Locks and Applications
This lesson outlines some of the common causes that contribute to the perception of a slow server.
What You Will Learn
After completing this lesson, you will be able to:
 Explain how lock hints are used and their impact.
 Discuss the effect on locking when an application uses Microsoft Transaction Server.
 Identify the different kinds of deadlocks, including distributed deadlocks.
Recommended Reading
 Chapter 14 "Locking", Inside SQL Server 2000 by Kalen Delaney
 Chapter 16 "Query Tuning", Inside SQL Server 2000 by Kalen Delaney
 Q239753 – Deadlock Situation Not Detected by SQL Server
 Q288752 – Blocked SPID Not Participating in Deadlock May Incorrectly be Chosen as victim

Locking Hints
UPDLOCK
If update locks are used instead of shared locks while reading a table, the locks are held until the end of the statement or transaction. UPDLOCK has the advantage of allowing you to read data (without blocking other readers) and update it later with the assurance that the data has not changed since you last read it.
READPAST
READPAST is an optimizer hint for use with SELECT statements. When this hint is used, SQL Server will read past locked rows. For example, assume table T1 contains a single integer column with the values 1, 2, 3, 4, and 5. If transaction A changes the value of 3 to 8 but has not yet committed, a SELECT * FROM T1 (READPAST) yields the values 1, 2, 4, 5.
Tip READPAST only applies to transactions operating at READ COMMITTED isolation and only reads past row-level locks. This lock hint can be used to implement a work queue on a SQL Server table. For example, assume there are many external work requests being thrown into a table and they should be serviced in approximate insertion order, but they do not have to be strictly FIFO. If you have four worker threads consuming work items from the queue, they could each pick up a record using READPAST locking, then delete the entry from the queue and commit when they are done. If they fail, they could roll back, leaving the entry on the queue for the next worker thread to pick up.
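A minimal sketch of that work-queue pattern (table and column names hypothetical); the @@ROWCOUNT check on the delete handles the rare case where two workers read the same item:

-- grab one work item, skipping rows locked by other workers
begin tran
declare @id int
select top 1 @id = iID from tblWorkQueue (readpast) order by iID
delete from tblWorkQueue where iID = @id   -- no-op if @id is null
if @@rowcount = 1
begin
    -- we own the item: process it here, then make the dequeue permanent
    commit tran
end
else
begin
    rollback tran   -- nothing available, or another worker took it first; retry later
end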
Caution The READPAST hint is not compatible with HOLDLOCK.

 Try This: Using Locking Hints
1. Open a Query Window and connect to the pubs database.
2. Execute the following statements (--Conn 1 is optional to help you keep track of each connection):
BEGIN TRANSACTION -- Conn 1
UPDATE titles SET price = price * 0.9 WHERE title_id = 'BU1032'
3. Open a second connection and execute the following statements:
SELECT @@lock_timeout -- Conn 2
GO
SELECT * FROM titles
SELECT * FROM authors
4. Open a third connection and execute the following statements:
SET LOCK_TIMEOUT 0 -- Conn 3
SELECT * FROM titles
SELECT * FROM authors
5. Open a fourth connection and execute the following statement:
SELECT * FROM titles (READPAST) -- Conn 4
WHERE title_ID < 'C'
SELECT * FROM authors
How many records were returned? 3
6. Open a fifth connection and execute the following statement:
SELECT * FROM titles (NOLOCK) -- Conn 5
WHERE title_ID < 'C'
SELECT * FROM authors

Deadlock Detection
Normally, the lock monitor checks for deadlocks every 5 seconds. When a deadlock is detected, an internal 20-second countdown starts; while that countdown is greater than 0 the lock manager also checks for deadlocks every time a SPID gets blocked. So a single deadlock will trigger 20 seconds of more immediate deadlock detection, but if no additional deadlocks occur in that 20 seconds, the lock manager no longer checks for deadlocks at each block and detection again only happens every 5 seconds. Although normally not needed, you may use trace flag -T1205 to trace the deadlock detection process.
Note Please note the distinction between application locks and other locks' deadlock detection. For an application lock, we do not roll back the transaction of the deadlock victim but simply return a -3 to sp_getapplock, which the application needs to handle itself.

Deadlock Resolution
How is a deadlock resolved? SQL Server picks one of the connections as a deadlock victim. The victim is chosen based on either which is the least expensive transaction (calculated using the number and size of the log records) to roll back, or in which process "SET DEADLOCK_PRIORITY LOW" is specified. The victim's transaction is rolled back, held locks are released, and SQL Server sends error 1205 to the victim's client application to notify it that it was chosen as a victim. The other process can then obtain access to the resource it was waiting on and continue.

Error 1205: Your transaction (process ID #%d) was deadlocked with another process and has been chosen as the deadlock victim. Rerun your transaction.

Symptoms of Deadlocking
Error 1205 usually is not written to the SQL Server errorlog. Unfortunately, you cannot use sp_altermessage to cause 1205 to be written to the errorlog. If the client application does not capture and display error 1205, some of the symptoms of deadlocks occurring are:
 Clients complain of mysteriously canceled queries when using certain features of an application.
 May be accompanied by excessive blocking. Lock contention increases the chances that a deadlock will occur.

Triggers and Deadlock
Triggers promote the deadlock priority of the SPID for the life of the trigger execution when the DEADLOCK PRIORITY is not set to low. When a statement in a trigger causes a deadlock to occur, the SPID executing the trigger is given preferential treatment and will not become the victim.

Warning Bug 235794 is filed against SQL Server 2000, where a blocked SPID that is not a participant of a deadlock may incorrectly be chosen as a deadlock victim if the SPID is blocked by one of the deadlock participants and the SPID has the least amount of transaction logging. See KB article Q288752, "Blocked Spid Not Participating in Deadlock May Incorrectly be Chosen as victim", for more information.
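Tying the application-lock note above to code, here is a minimal sketch (resource name hypothetical) of handling the -3 deadlock return from sp_getapplock, since the server does not roll the victim back automatically in this case:

-- acquire an application lock and handle being chosen as a deadlock victim
declare @rc int
begin tran
exec @rc = sp_getapplock @Resource = 'MyAppResource', @LockMode = 'Exclusive', @LockTimeout = 5000
if @rc = -3
begin
    -- this session was the application-lock deadlock victim; clean up explicitly
    rollback tran
end
else
begin
    -- ... protected work here ...
    exec sp_releaseapplock @Resource = 'MyAppResource'
    commit tran
end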
Distributed Deadlock – Scenario 1
Distributed Deadlocks
The term distributed deadlock is ambiguous; there are many types of distributed deadlocks.
Scenario 1
The client application opens connection A, begins a transaction, acquires some locks, then opens connection B. Connection B gets blocked by A, but the application is designed not to commit A's transaction until B completes.
Note SQL Server has no way of knowing that connection A is somehow dependent on B – they are two distinct connections with two distinct transactions. This situation is discussed in scenario #4 in "Q224453 INF: Understanding and Resolving SQL Server 7.0 Blocking Problems".
Distributed Deadlock – Scenario 2
Scenario 2
A distributed deadlock involving bound connections. Two connections can be bound into a single transaction context with sp_getbindtoken/sp_bindsession or via DTC. Spid 60 enlists in a transaction with spid 61. A third spid, 62, is blocked by spid 60, but spid 61 is blocked by spid 62. Because they are doing work in the same transaction, spid 60 cannot commit until spid 61 finishes its work, but spid 61 is blocked by 62, which is blocked by 60. This scenario is described in the article "Q239753 - Deadlock Situation Not Detected by SQL Server."
Note SQL Server 6.5 and 7.0 do not detect this deadlock. The SQL Server 2000 deadlock detection algorithm has been enhanced to detect this type of distributed deadlock. The diagram in the slide illustrates this situation. Resources locked by a spid are below that spid (in a box). Arrows indicate blocking and are drawn from the blocked spid to the resource that the spid requires. A circle represents a transaction; spids in the same transaction are shown in the same circle.
Distributed Deadlock – Scenario 3
Scenario 3
A distributed deadlock involving linked servers or server-to-server RPC. Spid 60 on Server 1 executes a stored procedure on Server 2 via a linked server. This stored procedure does a loopback linked server query against a table on Server 1, and this connection is blocked by a lock held by spid 60.
Note No version of SQL Server is currently designed to detect this distributed deadlock.

Lesson 4: Information Collection and Analysis
This lesson outlines some of the common causes that contribute to the perception of a slow server.
What You Will Learn
After completing this lesson, you will be able to:
 Identify specific information needed for troubleshooting issues.
 Locate and collect information needed for troubleshooting issues.
 Analyze output of the DBCC Inputbuffer, DBCC PSS, and DBCC Page commands.
 Review information collected from the master.dbo.sysprocesses table.
 Review information collected from the master.dbo.syslockinfo table.
 Review output of sp_who, sp_who2, and sp_lock.
 Analyze Profiler logs for query usage patterns.
 Review output of trace flags to help troubleshoot deadlocks.
Recommended Reading
 Q244455 - INF: Definition of Sysprocesses Waittype and Lastwaittype Fields
 Q244456 - INF: Description of DBCC PSS Command for SQL Server 7.0
 Q271509 - INF: How to Monitor SQL Server 2000 Blocking
 Q251004 - How to Monitor SQL Server 7.0 Blocking
 Q224453 - Understanding and Resolving SQL Server 7.0 Blocking Problems
 Q282749 – BUG: Deadlock information reported with SQL Server 2000 Profiler

Locking and Blocking
 Try This: Examine Blocked Processes
1. Open a Query Window and connect to the pubs database.
Execute the following statements:
BEGIN TRAN -- connection 1
UPDATE titles SET price = price + 1
2. Open another connection and execute the following statement:
SELECT * FROM titles -- connection 2
3. Open a third connection and execute sp_who; note the process id (spid) of the blocked process. (Connection 3)
4. In the same connection, execute the following:
SELECT spid, cmd, waittype FROM master..sysprocesses WHERE waittype <> 0 -- connection 3
5. Do not close any of the connections! What was the wait type of the blocked process?
 Try This: Look at Locks Held
This assumes all your connections are still open from the previous exercise.
• Execute sp_lock -- Connection 3
What locks is the process from the previous example holding? Make sure you run ROLLBACK TRAN in Connection 1 to clean up your transaction.

Collecting Information
See Module 2 for more about how to gather this information using various tools.

Recognizing Blocking Problems
How to Recognize Blocking Problems
 Users complain about poor performance at a certain time of day, or after a certain number of users connect.
 SELECT * FROM sysprocesses or sp_who2 shows non-zero values in the blocked or BlkBy column (see the query sketch below).
 More severe blocking incidents will have long blocking chains or large sysprocesses.waittime values for blocked spids.
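As a starting point, here is a minimal sketch of a query that surfaces blocked SPIDs, how long they have waited, and who blocks them (column meanings per Q244455 above):

-- list blocked SPIDs, their wait time and type, and the blocking SPID
select spid, blocked, waittime, lastwaittype, cmd
from master.dbo.sysprocesses
where blocked <> 0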

2009-11-27

微软内部资料-SQL性能优化2

Contents
Module Overview
Lesson 1: Memory
Lesson 2: I/O
Lesson 3: CPU

Module 3: Troubleshooting Server Performance
Module Overview
Troubleshooting server performance-based support calls requires product knowledge, good communication skills, and a proven troubleshooting methodology. In this module we will discuss Microsoft® SQL Server™ interaction with the operating system and the methodology of troubleshooting server-based problems. At the end of this module, you will be able to:
 Define the common terms associated with the memory, I/O, and CPU subsystems.
 Describe how SQL Server leverages the Microsoft Windows® operating system facilities, including memory, I/O, and threading.
 Define common SQL Server memory, I/O, and processor terms.
 Generate a hypothesis based on performance counters captured by System Monitor.
 For each hypothesis generated, identify at least two other non-System Monitor pieces of information that would help to confirm or reject your hypothesis.
 Identify at least five counters for each subsystem that are key to understanding the performance of that subsystem.
 Identify three common myths associated with the memory, I/O, or CPU subsystems.

Lesson 1: Memory
What You Will Learn
After completing this lesson, you will be able to:
 Define common terms used when describing memory.
 Give examples of each memory concept and how it applies to SQL Server.
 Describe how SQL Server uses and manages its memory.
 List the primary configuration options that affect memory.
 Describe how configuration options affect memory usage.
 Describe the effect on the I/O subsystem when memory runs low.
 List at least two memory myths and why they are not true.
Recommended Reading
 SQL Server 7.0 Performance Tuning Technical Reference, Microsoft Press
 Windows 2000 Resource Kit companion CD-ROM documentation. Chapter 15: Overview of Performance Monitoring
 Inside Microsoft Windows 2000, Third Edition, David A. Solomon and Mark E. Russinovich
 Windows 2000 Server Operations Guide, Storage, File Systems, and Printing; Chapters: Evaluating Memory and Cache Usage
 Advanced Windows, 4th Edition, Jeffrey Richter, Microsoft Press
Related Web Sites
 http://ntperformance/

Memory Definitions
Before we look at how SQL Server uses and manages its memory, we need to ensure a full understanding of the more common memory-related terms. The following definitions will help you understand how SQL Server interacts with the operating system when allocating and using memory.
Virtual Address Space
A set of memory addresses that are mapped to physical memory addresses by the system. In a 32-bit operating system, there is normally a linear array of 2^32 addresses representing 4,294,967,296 byte addresses.
Physical Memory
A series of physical locations, with unique addresses, that can be used to store instructions or data.
AWE – Address Windowing Extensions
A 32-bit process is normally limited to addressing 2 gigabytes (GB) of memory, or 3 GB if the system was booted using the /3GB boot switch, even if there is more physical memory available. By leveraging the Address Windowing Extensions API, an application can create a fixed-size window into the additional physical memory. This allows a process to access any portion of the physical memory by mapping it into the application's window.
When used in combination with Intel's Physical Addressing Extensions (PAE) on Windows 2000, an AWE-enabled application can support up to 64 GB of memory.
Reserved Memory
Pages in a process's address space are free, reserved, or committed. Reserving memory address space is a way to reserve a range of virtual addresses for later use. If you attempt to access a reserved address that has not yet been committed (backed by memory or disk), you will cause an access violation.
Committed Memory
Committed pages are those pages that, when accessed, ultimately translate to pages in memory. Those pages may, however, have to be faulted in from a page file or memory-mapped file.
Backing Store
Backing store is the physical representation of a memory address.
Page Fault (Soft/Hard)
A reference to an invalid page (a page that is not in your working set) is referred to as a page fault. Assuming the page reference does not result in an access violation, a page fault can be either hard or soft. A hard page fault results in a read from disk, either a page file or a memory-mapped file. A soft page fault is resolved from one of the modified, standby, free, or zero page transition lists. Paging is represented by a number of counters, including Page Faults/sec, Pages Input/sec, and Pages Output/sec. Page Faults/sec includes soft and hard page faults, whereas the page input/output counters represent hard page faults. Unfortunately, all of these counters include file system cache activity. For more information, see also… Inside Windows 2000, Third Edition, pp. 443-451.
Private Bytes
Private, non-shared committed address space.
Working Set
The subset of a process's virtual pages that is resident in physical memory. For more information, see also… Inside Windows 2000, Third Edition, p. 455.
System Working Set
Like a process, the system has a working set. Five different types of pages represent the system's working set: system cache; paged pool; pageable code and data in the kernel; pageable code and data in device drivers; and system mapped views. The system working set is represented by the counter Memory: Cache Bytes. System working set paging activity can be viewed by monitoring the Memory: Cache Faults/sec counter. For more information, see also… Inside Windows 2000, Third Edition, p. 463.
System Cache
The Windows 2000 cache manager provides data caching for both local and network file system drivers. By caching virtual blocks, the cache manager can reduce disk I/O and provide intelligent read-ahead. Represented by Memory: System Cache Resident Bytes. For more information, see also… Inside Windows 2000, Third Edition, pp. 654-659.
Non-Paged Pool
A range of addresses guaranteed to be resident in physical memory. As such, non-paged pool can be accessed at any time without incurring a page fault. Because device drivers operate at DPC/dispatch level (covered in Lesson 2), and page faults are not allowed at this level or above, most device drivers use non-paged pool to assure that they do not incur a page fault. Represented by Memory: Pool Nonpaged Bytes, typically between 3-30 megabytes (MB) in size.
Note The pool is, in effect, a common area of memory shared by all processes. One of the most common uses of non-paged pool is the storage of object handles. For more information regarding "maximums," see also… Inside Windows 2000, Third Edition, pp. 403-404.
Paged Pool
A range of addresses that can be paged in and out of physical memory. Typically used by drivers that need memory but do not need to access that memory at DPC/dispatch level or above.
Represented by Memory: Pool Paged Bytes and Memory: Pool Paged Resident Bytes. Typically between 10-30 MB plus the size of the Registry. For more information regarding "limits," see also… Inside Windows 2000, Third Edition, pp. 403-404.
Stack
Each thread has two stacks, one for kernel mode and one for user mode. A stack is an area of memory in which program procedure or function call addresses and parameters are temporarily stored.
In Process
To run in the same address space. In-process servers are loaded in the client's address space because they are implemented as DLLs. The main advantage of running in-process is that the system usually does not need to perform a context switch. The disadvantage of running in-process is that the DLL has access to the process address space and can potentially cause problems.
Out of Process
To run outside the calling process's address space. OLE DB providers can run in-process or out of process. When running out of process, they run under the context of DLLHOST.EXE.
Memory Leak
To reserve or commit memory and unintentionally not release it when it is no longer being used. A process can leak resources such as process memory, pool memory, user and GDI objects, handles, threads, and so on.
Memory Concepts (X86 Address Space)
Per Process Address Space
Every process has its own private virtual address space. For 32-bit processes, that address space is 4 GB, based on a 32-bit pointer. Each process's virtual address space is split into user and system partitions based on the underlying operating system. The diagram included at the top represents the address partitioning for the 32-bit version of Windows 2000. Typically, the process address space is evenly divided into two 2-GB regions. Each process has access to 2 GB of the 4 GB address space. The upper 2 GB of address space is reserved for the system. The user address space is where application code, global variables, per-thread stacks, and DLL code would reside. The system address space is where the kernel, executive, HAL, boot drivers, page tables, pool, and system cache reside. For specific information regarding address space layout, refer to Inside Microsoft Windows 2000, Third Edition, pages 417-428, by Microsoft Press.
Access Modes
Each virtual memory address is tagged as to what access mode the processor must be running in. System space can only be accessed while in kernel mode, while user space is accessible in user mode. This protects system space from being tampered with by user mode code.
Shared System Space
Although every process has its own private memory space, kernel mode code and drivers share system space. Windows 2000 does not provide any protection for private memory being used by components running in kernel mode. As such, it is very important to ensure that components running in kernel mode are thoroughly tested.
3-GB Address Space
Although 2 GB of address space may seem like a large amount of memory, applications such as SQL Server could leverage more memory if it were available. The boot.ini option /3GB was created for those cases where systems actually support greater than 2 GB of physical memory and an application can make use of it. This capability allows memory-intensive applications running on Windows 2000 Advanced Server to use up to 50 percent more virtual memory on Intel-based computers. Application memory tuning provides more of the computer's virtual memory to applications by providing less virtual memory to the operating system.
Although a system having less than 2 GB of physical memory can be booted using the /3GB switch, in most cases this is ill-advised. If you restart with the /3GB switch, also known as 4-Gig Tuning, the amount of non-paged pool is reduced to 128 MB from 256 MB. For a process to access 3 GB of address space, the executable image must have been linked with the /LARGEADDRESSAWARE flag or modified using Imagecfg.exe. It should be pointed out that SQL Server was linked using the /LARGEADDRESSAWARE flag and can leverage 3 GB when enabled.
Note Even though you can boot Windows 2000 Professional or Windows 2000 Server with the /3GB boot option, user processes are still limited to 2 GB of address space even if the IMAGE_FILE_LARGE_ADDRESS_AWARE flag is set in the image. The only thing accomplished by using the /3GB option on these systems is the reduction in the amount of address space available to the system (ISW2K Pg. 418).
Important If you use /3GB in conjunction with AWE/PAE you are limited to 16 GB of memory. For more information, see the following Knowledge Base articles:
Q171793 Information on Application Use of 4GT RAM Tuning
Q126402 PagedPoolSize and NonPagedPoolSize Values in Windows NT
Q247904 How to Configure Paged Pool and System PTE Memory Areas
Q274598 W2K Does Not Enable Complete Memory Dumps Between 2 & 4 GB

AWE Memory Layout
AWE Memory
Usually, the operating system is limited to 4 GB of physical memory. However, by leveraging PAE, Windows 2000 Advanced Server can support up to 8 GB of memory, and Data Center 64 GB of memory. However, as stated previously, each 32-bit process normally has access to only 2 GB of address space, or 3 GB if the system was booted with the /3GB option. To allow processes to allocate more physical memory than can be represented in the 2 GB of address space, Microsoft created the Address Windowing Extensions (AWE). These extensions allow for the allocation and use of up to the amount of physical memory supported by the operating system. By leveraging the Address Windowing Extensions API, an application can create a fixed-size window into the physical memory. This allows a process to access any portion of the physical memory by mapping regions of physical memory in and out of the application's window. The allocation and use of AWE memory is accomplished by:
 Creating a window via VirtualAlloc using the MEM_PHYSICAL option
 Allocating the physical pages through AllocateUserPhysicalPages
 Mapping the RAM pages to the window using MapUserPhysicalPages
Note SQL Server 7.0 supports a feature called extended memory in Windows NT® 4 Enterprise Edition by using a PSE36 driver. Currently there are no PSE drivers for Windows 2000. The preferred method of accessing extended memory is via the Physical Addressing Extensions using AWE. The AWE mapping feature is much more efficient than the older process of copying buffers from extended memory into the process address space. Because there are currently no PSE36 drivers for Windows 2000, SQL Server 7.0 cannot leverage PAE/AWE and therefore cannot support more than 3 GB of memory on Windows 2000. Refer to KB article Q278466.
AWE Restrictions
 The process must have the Lock Pages In Memory user right to use AWE.
Important It is important that you use Enterprise Manager or DMO to change the service account. Enterprise Manager and DMO will grant all of the privileges and Registry and file permissions needed for SQL Server.
The Service Control Panel does NOT grant all the rights or permissions needed to run SQL Server.
 Pages are not shareable or pageable.
 Page protection is limited to read/write.
 The same physical page cannot be mapped into two separate AWE regions, even within the same process.
 The use of AWE/PAE in conjunction with /3GB will limit the maximum amount of supported memory to between 12-16 GB.
 Task Manager does not show the correct amount of memory allocated to AWE-enabled applications. You must use the Memory Manager: Total Server Memory counter. It should, however, be noted that this only shows memory in use by the buffer pool.
 Machines that have PAE enabled will not dump user mode memory. If an event occurs in user mode memory that causes a blue screen and root cause determination is absolutely necessary, the machine must be booted with the /NOPAE switch, and with /MAXMEM set to a number appropriate for transferring dump files.
 With AWE enabled, SQL Server will, by default, allocate almost all memory during startup, leaving 256 MB or less free. This memory is locked and cannot be paged out. Consuming all available memory may prevent other applications or SQL Server instances from starting.
Note PAE is not required to leverage AWE. However, if you have more than 4 GB of physical memory you will not be able to access it unless you enable PAE.
Caution It is highly recommended that you use the "max server memory" option in combination with "awe enabled" to ensure some memory headroom exists for other applications or instances of SQL Server, because AWE memory cannot be shared or paged.
For more information, see the following Knowledge Base articles:
Q268363 Intel Physical Addressing Extensions (PAE) in Windows 2000
Q241046 Cannot Create a dump File on Computers with over 4 GB RAM
Q255600 Windows 2000 utilities do not display physical memory above 4GB
Q274750 How to configure SQL Server memory more than 2 GB (Idea)
Q266251 Memory dump stalls when PAE option is enabled (Idea)
Tip The KB will return more hits if you query on PAE rather than AWE.
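A minimal T-SQL sketch of the recommended configuration; the option names are the documented ones, but the memory value is purely illustrative and must be sized for your server:

-- enable AWE and cap the buffer pool so other applications keep some headroom
exec sp_configure 'show advanced options', 1
reconfigure
go
exec sp_configure 'awe enabled', 1           -- takes effect only after SQL Server restarts
exec sp_configure 'max server memory', 6144  -- in MB; 6144 is only an illustrative value
reconfigure
go
-- after the restart, check buffer pool usage (Task Manager is misleading with AWE)
select cntr_value / 1024 as total_server_memory_mb
from master.dbo.sysperfinfo
where object_name like '%Memory Manager%'
  and counter_name = 'Total Server Memory (KB)'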
Virtual Address Space Mapping
By default, Windows 2000 (on an X86 platform) uses a two-level (three-level when PAE is enabled) page table structure to translate virtual addresses to physical addresses. Each 32-bit address has three components, as shown below. When a process accesses a virtual address, the system must first locate the Page Directory for the current process via register CR3 (X86). The first 10 bits of the virtual address act as an index into the Page Directory. The Page Directory Entry then points to the Page Frame Number (PFN) of the appropriate Page Table. The next 10 bits of the virtual address act as an index into the Page Table to locate the appropriate page. If the page is valid, the PTE contains the PFN of the actual page in memory. If the page is not valid, the memory management fault handler locates the page and attempts to make it valid. The final 12 bits act as a byte offset into the page.
Note This multi-step process is expensive. This is why systems have translation lookaside buffers (TLB) to speed up the process. One of the reasons context switching is so expensive is that the translation buffers must be dumped. Thus, the first few lookups are very expensive. Refer to ISW2K pages 439-440.
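To make the 10/10/12 split concrete, here is a small sketch (the address is arbitrary) that decomposes a virtual address the way the hardware does; T-SQL integer arithmetic is used purely as a calculator here:

-- decompose virtual address 0x12345678 into directory index, table index, and byte offset
declare @va int
set @va = 305419896                 -- 0x12345678
select (@va / 4194304) & 1023 as page_directory_index, -- top 10 bits (va >> 22)
       (@va / 4096) & 1023    as page_table_index,     -- next 10 bits ((va >> 12) & 0x3FF)
       @va & 4095             as byte_offset           -- low 12 bits
-- returns 72, 837, 1656 (0x48, 0x345, 0x678)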
Core System Memory Related Counters
When evaluating memory performance, you are looking at a wide variety of counters. The counters listed here are a few of the core counters that give you a quick overall view of the state of memory. The two key counters are Available Bytes and Committed Bytes. If Committed Bytes exceeds the amount of physical memory in the system, you can be assured that there is some level of hard page fault activity happening. The goal of a well-tuned system is to have as little hard paging as possible. If Available Bytes is below 5 MB, you should investigate why. If Available Bytes is below 4 MB, the Working Set Manager will start to aggressively trim the working sets of processes, including the system cache.
 Committed Bytes Total memory, including physical and page file, currently committed
 Commit Limit
• Physical memory + page file size
• Represents the total amount of memory that can be committed without expanding the page file (assuming the page file is allowed to grow)
 Available Bytes Total physical memory currently available
Note Available Bytes is a key indicator of the amount of memory pressure. Windows 2000 will attempt to keep this above approximately 4 MB by aggressively trimming the working sets, including the system cache. If this value is constantly between 3-4 MB, it is cause for investigation. One counter you might expect would be for total physical memory. Unfortunately, there is no specific counter for total physical memory. There are, however, many other ways to determine it; one of the most common is viewing the Performance tab of Task Manager.
Page File Usage
The only counters that show current page file space usage are Paging File: % Usage and Paging File: % Usage Peak. These two counters will give you an indication of the amount of space currently used in the page file.

Memory Performance
Memory Counters
There are a number of counters that you need to investigate when evaluating memory performance. As stated previously, no single counter provides the entire picture. You will need to consider many different counters to begin to understand the true state of memory.
Note The counters listed are a subset of the counters you should capture.
*Available Bytes
In general, it is desirable to see Available Bytes above 5 MB. SQL Server's goal on Intel platforms, running Windows NT, is to assure there is approximately 5+ MB of free memory. After Available Bytes reaches 4 MB, the Working Set Manager will start to aggressively trim the working sets of processes and, finally, the system cache. This is not to say that working set trimming does not happen before 4 MB, but it does become more pronounced as the number of available bytes decreases below 4 MB.
Page Faults/sec
Page Faults/sec represents the total number of hard and soft page faults. This value includes the System Working Set as well. Keep this in mind when evaluating the amount of paging activity in the system. Because this counter includes paging associated with the System Cache, a server acting as a file server may have a much higher value than a dedicated SQL Server. The System Working Set is covered in depth on the next slide. Because Page Faults/sec includes soft faults, this counter is not as useful as Pages/sec, which represents hard page faults. Because of the associated I/O, hard page faults tend to be much more expensive.
*Pages/sec
Pages/sec represents the number of pages written to or read from disk because of hard page faults. It is the sum of Memory: Pages Input/sec and Memory: Pages Output/sec. Because it is counted in numbers of pages, it can be compared to other counts of pages, such as Memory: Page Faults/sec, without conversion. On a well-tuned system, this value should be consistently low. In and of itself, a high value for this counter does not necessarily indicate a problem. You will need to isolate the paging activity to determine whether it is associated with in-paging, out-paging, memory-mapped file activity, or the system cache. Any one of these activities will contribute to this counter.
Note Paging in and of itself is not necessarily a bad thing. Paging is only "bad" when a critical process must wait for its pages to be in-paged, or when the amount of read/write paging is causing excessive kernel time or disk I/O, thus interfering with normal user mode processing.
Tip (Memory: Pages/sec * 4096) / (PhysicalDisk: Disk Bytes/sec) yields the approximate percentage of total disk I/O due to paging. Note, this is only relevant on X86 platforms with a 4 KB page size.
Page Reads/sec (Hard Page Fault)
Page Reads/sec is the number of times the disk was accessed to resolve hard page faults. It includes reads to satisfy faults in the file system cache (usually requested by applications) and in non-cached memory-mapped files. This counter counts numbers of read operations, without regard to the number of pages retrieved by each operation. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval.
Page Writes/sec (Hard Page Fault)
Page Writes/sec is the number of times pages were written to disk to free up space in physical memory. Pages are written to disk only if they are changed while in physical memory, so they are likely to hold data, not code. This counter counts write operations, without regard to the number of pages written in each operation. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval.
*Pages Input/sec (Hard Page Fault)
Pages Input/sec is the number of pages read from disk to resolve hard page faults. It includes pages retrieved to satisfy faults in the file system cache and in non-cached memory-mapped files. This counter counts numbers of pages, and can be compared to other counts of pages, such as Memory: Page Faults/sec, without conversion. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. This is one of the key counters to monitor for potential performance complaints. Because a process must wait for a read page fault to be resolved, read page faults have a direct impact on the perceived performance of a process.
*Pages Output/sec (Hard Page Fault)
Pages Output/sec is the number of pages written to disk to free up space in physical memory. Pages are written back to disk only if they are changed in physical memory, so they are likely to hold data, not code. A high rate of pages output might indicate a memory shortage. Windows NT writes more pages back to disk to free up space when physical memory is in short supply. This counter counts numbers of pages, and can be compared to other counts of pages, without conversion. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval. Like Pages Input/sec, this is one of the key counters to monitor.
Processes will generally not notice write page faults unless the disk I/O begins to interfere with normal data operations.

Demand Zero Faults/sec (Soft Page Fault)
Demand Zero Faults/sec is the number of page faults that require a zeroed page to satisfy the fault. Zeroed pages, pages emptied of previously stored data and filled with zeros, are a security feature of Windows NT. Windows NT maintains a list of zeroed pages to accelerate this process. This counter counts numbers of faults, without regard to the number of pages retrieved to satisfy the fault. It displays the difference between the values observed in the last two samples, divided by the duration of the sample interval.

Transition Faults/sec (Soft Page Fault)
Transition Faults/sec is the number of page faults resolved by recovering pages that were on the modified page list, on the standby list, or being written to disk at the time of the page fault. The pages were recovered without additional disk activity. Transition faults are counted in numbers of faults, without regard for the number of pages faulted in each operation. This counter displays the difference between the values observed in the last two samples, divided by the duration of the sample interval.

System Working Set
Like processes, the system's pageable code and data are managed by a working set. For the purpose of this course, that working set is referred to as the System Working Set, to differentiate the system cache portion of the working set from the entire working set. Five different types of pages make up the System Working Set: system cache; paged pool; pageable code and data in ntoskrnl.exe; pageable code and data in device drivers; and system-mapped views. Unfortunately, some of the counters that appear to represent the system cache actually represent the entire system working set; this is noted where it applies.

Note: The counters listed are a subset of the counters you should capture.

*Memory: Cache Bytes (represents the total System Working Set)
Represents the total size of the System Working Set, including: system cache; paged pool; pageable code and data in ntoskrnl.exe; pageable code and data in device drivers; and system-mapped views. Cache Bytes is the sum of the following counters: System Cache Resident Bytes, System Driver Resident Bytes, System Code Resident Bytes, and Pool Paged Resident Bytes.

Memory: System Cache Resident Bytes (System Cache)
System Cache Resident Bytes is the number of bytes from the file system cache that are resident in physical memory. The Windows 2000 Cache Manager works with the memory manager to provide virtual block stream and file data caching. For more information, see Inside Windows 2000, Third Edition, pp. 645-650 and p. 656.

Memory: Pool Paged Resident Bytes
Represents the physical memory consumed by paged pool. This counter should NOT be monitored by itself; you must also monitor Memory: Pool Paged Bytes. A leak in the pool may not show up in Pool Paged Resident Bytes.

Memory: System Driver Resident Bytes
Represents the physical memory consumed by driver code and data. System Driver Resident Bytes and System Driver Total Bytes do not include code that must remain in physical memory and cannot be written to disk.

Memory: System Code Resident Bytes
Represents the physical memory consumed by pageable system code.
System Code Resident Bytes and System Code Total Bytes do not include code that must remain in physical memory and cannot be written to disk.

Working Set Performance Counter
You can measure the number of page faults in the System Working Set by monitoring the Memory: Cache Faults/sec counter. Contrary to the "Explain" text shown in System Monitor, this counter measures the total number of page faults per second in the System Working Set, not only in the System Cache. You cannot measure the performance of the System Cache using this counter alone. For more information, see Inside Windows 2000, Third Edition, p. 656.

Note: You will find that, in general, the working set manager will usually trim the working sets of normal processes prior to trimming the system working set.

System Cache
The Windows 2000 cache manager provides a write-back cache with lazy writing and intelligent read-ahead. Files are not written to disk immediately; writes are deferred until the cache manager calls the memory manager to flush the cache. This helps to reduce the total number of I/Os. Once per second, the lazy writer thread queues one-eighth of the dirty pages in the system cache to be written to disk. If this is not sufficient to meet the needs, the lazy writer will calculate a larger value. If the dirty page threshold is exceeded before the lazy writer wakes, the cache manager will wake the lazy writer.

Important: It should be pointed out that mapped files, and files opened with FILE_FLAG_NO_BUFFERING, do not participate in the System Cache. For more information regarding mapped views, see Inside Windows 2000, Third Edition, p. 669.

For applications that would like to leverage the system cache but cannot tolerate write delays, the cache manager supports write-through operations via FILE_FLAG_WRITE_THROUGH. On the other hand, an application can disable lazy writing by using FILE_ATTRIBUTE_TEMPORARY. If this attribute is set, the lazy writer will not write the pages to disk unless there is a shortage of memory or the file is closed.

Important: Microsoft SQL Server uses both FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH.

Tip: The file system cache is not represented by a static amount of memory. The system cache can and will grow, and it is not unusual to see it consume a large amount of memory. Like other working sets, it is trimmed under pressure, but it is generally the last thing to be trimmed.

System Cache Performance Counters
The counters listed are a subset of the counters you should capture.

Cache: Data Flushes/sec
Data Flushes/sec is the rate at which the file system cache has flushed its contents to disk as the result of a request to flush or to satisfy a write-through file write request. More than one page can be transferred on each flush operation.

Cache: Data Flush Pages/sec
Data Flush Pages/sec is the number of pages the file system cache has flushed to disk as a result of a request to flush or to satisfy a write-through file write request.

Cache: Lazy Write Flushes/sec
Represents the rate of lazy-write flushes of the system cache per second. More than one page can be transferred on each flush operation.

Cache: Lazy Write Pages/sec
Lazy Write Pages/sec is the rate at which the lazy writer thread has written pages to disk.

Note: When looking at Memory: Cache Faults/sec, you can remove cache write activity by subtracting (Cache: Data Flush Pages/sec + Cache: Lazy Write Pages/sec).
This will give you a better idea of how much other page-faulting activity is associated with the other components of the System Working Set. Note, however, that there is no easy way to remove the page faults associated with file cache read activity.

For more information, see the following Knowledge Base articles:
Q145952 (NT4) Event ID 26 Appears If Large File Transfer Fails
Q163401 (NT4) How to Disable Network Redirector File Caching
Q181073 (SQL 6.5) DUMP May Cause Access Violation on Win2000

System Pool
As documented earlier, there are two types of shared pool memory: non-paged pool and paged pool. Like private memory, pool memory is susceptible to leaks.

Nonpaged Pool
Miscellaneous kernel code and structures, and drivers that need working memory while at or above DPC/dispatch level, use non-paged pool. The primary counter for non-paged pool is Memory: Pool Nonpaged Bytes. This counter will usually be between 3 and 30 MB.

Paged Pool
Drivers that do not need to access memory at or above DPC/dispatch level are one of the primary users of paged pool; however, any kernel-mode component can use paged pool by leveraging the ExAllocatePool calls. Paged pool also contains the Registry and file and printing structures. The primary counter for monitoring paged pool is Memory: Pool Paged Bytes. This counter will usually be between 10-30 MB plus the size of the Registry. To determine how much of paged pool is currently resident in physical memory, monitor Memory: Pool Paged Resident Bytes.

Note: The paged and non-paged pools are two of the components of the System Working Set. If a suspected leak is clearly visible in the overview and not associated with a process, then it is most likely a pool leak. If the leak is not associated with SQL Server handles, OLE DB providers, XPROCs, or sp_OA calls, then the issue most likely should be escalated to the Windows NT group.

For more information, see the following Knowledge Base articles:
Q265028 (MS) Pool Tags
Q258793 (MS) How to Find Memory Leaks by Using Pool Bitmap Analysis
Q115280 (MS) Finding Windows NT Kernel Mode Memory Leaks
Q177415 (MS) How to Use Poolmon to Troubleshoot Kernel Mode Memory Leaks
Q126402 PagedPoolSize and NonPagedPoolSize Values in Windows NT
Q247904 How to Configure Paged Pool and System PTE Memory Areas

Tip: To isolate pool leaks you will need to isolate all drivers and third-party processes. This should be done by disabling each service or driver one at a time and monitoring the effect. You can also monitor paged and non-paged pool through poolmon. If pool tagging has been enabled via GFLAGS, you may be able to associate the leak with a particular tag. If you suspect a particular tag, you should involve the platform support group.

Process Memory Counters
Process _Total Limitations
Although the _Total rollups for Process: Private Bytes, Virtual Bytes, Handle Count, and Thread Count represent the key resources being used across all processes, they can be misleading when evaluating a memory leak. This is because a leak in one process may be masked by a decrease in another process.

Note: The counters listed are a subset of the counters you should capture.

Tip: When analyzing memory leaks, it is often easier to build a separate chart or report showing only one or two key counters for all processes. The primary counter used for leak analysis is Private Bytes, but processes can leak handles and threads just as easily. After a suspect process is located, build a separate chart that includes all the counters for that process.
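As a programmatic cross-check of the per-process counters above, the following minimal Win32 sketch (ours, not part of the original course notes) reads the same numbers for the current process through the documented psapi API. Link with psapi.lib; the mapping noted in the comments between PROCESS_MEMORY_COUNTERS fields and System Monitor counters is approximate.

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc;
    pmc.cb = sizeof(pmc);

    // GetCurrentProcess() returns a pseudo-handle; no CloseHandle is needed.
    if (!GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
        return 1;

    // WorkingSetSize corresponds to Process: Working Set.
    // PagefileUsage corresponds roughly to Process: Private Bytes (commit charge).
    // PageFaultCount is a cumulative count, not the Page Faults/sec rate.
    printf("Working set:      %lu KB\n", (unsigned long)(pmc.WorkingSetSize / 1024));
    printf("Pagefile usage:   %lu KB\n", (unsigned long)(pmc.PagefileUsage / 1024));
    printf("Page fault count: %lu\n",    (unsigned long)pmc.PageFaultCount);
    return 0;
}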
Individual Process Counters
When analyzing individual processes for memory leaks, you should include the counters listed:
• Process: % Processor Time
• Process: Working Set (includes shared pages)
• Process: Virtual Bytes
• Process: Private Bytes
• Process: Page Faults/sec
• Process: Handle Count
• Process: Thread Count
• Process: Pool Paged Bytes
• Process: Pool Nonpaged Bytes

Tip: WINLOGON, SVCHOST, SERVICES, and SPOOLSV are referred to as HELPER processes. They provide core functionality for many operations and as such are often extended by the addition of third-party DLLs. Tlist -s may help identify what services are running under a particular helper.

Helper Processes
Winlogon, Services, Spoolsv, and Svchost are examples of what are referred to as HELPER processes. They provide core functionality for many operations and, as such, are often extended by the addition of third-party DLLs. Running every service in its own process can waste system resources; consequently, some services run in their own processes while others share a process with other services. One problem with sharing a process is that a bug in one service may cause the entire process to fail. The Resource Kit tool Tlist, when used with the -s qualifier, can help you identify which services are running in which processes.

WINLOGON: used to support GINAs.
SPOOLSV: responsible for printing. You will need to investigate all added printing functionality.
SERVICES: responsible for system services.
SVCHOST: Svchost.exe is a generic host process name for services that are run from dynamic-link libraries (DLLs). There can be multiple instances of Svchost.exe running at the same time. Each Svchost.exe session can contain a grouping of services, so that separate services can be run depending on how and where Svchost.exe is started. This allows for better control and debugging.

The Effect of Memory on Other Components
Memory Drives Overall Performance
Processor, cache, bus speeds, and I/O all play a role in overall perceived performance. Without minimizing the impact of these components, it is important to point out that a shortage of memory can often have a larger perceived impact on performance than a shortage of some other resource. On the other hand, an abundance of memory can often be leveraged to mask bottlenecks; for instance, in certain environments the file system cache can significantly reduce the amount of disk I/O, potentially masking a slow I/O subsystem.

Effect on I/O
I/O can be driven by a number of memory considerations. Page read faults will cause a read I/O when a page is not in memory. If the modified page list becomes too long, the Modified Page Writer and Mapped Page Writer will need to start flushing pages, causing disk writes. However, the one event that can have the greatest impact is running low on available memory: in that case, all of the above events become more pronounced and have a larger impact on disk activity.

Effect on CPU
The most effective use of a processor, from a process perspective, is to spend as much time as possible executing user-mode code. Kernel mode represents processor time associated with doing work, directly or indirectly, on behalf of a thread. This includes items such as synchronization, scheduling, I/O, memory management, and so on. Although this work is essential, it takes processor cycles, and the cost, in cycles, to transition between user and kernel mode is expensive.
Because all memory management and I/O functions must be done in kernel mode, it follows that the fewer the memory resources, the more cycles are going to be spent managing those resources. A direct result of low memory is that the Working Set Manager, Modified Page Writer, and Mapped Page Writer will have to use more cycles attempting to free memory.

Analyzing Memory
Look for Trends and Trend Relationships
Troubleshooting performance is about analyzing trends and trend relationships. Establishing that some event happened is not enough; you must establish the effect of the event. For example, you note that paging activity is high at the same time that SQL Server becomes slow. These two individual facts may or may not be related. If the paging is not associated with SQL Server's working set or with the disks SQL Server is using, there may be little or no cause-and-effect relationship.

Look at Physical Memory First
The first item to look at is physical memory. You need to know how much physical memory and page file space the system has to work with. You should then evaluate how much available memory there is. Just because the system has free memory does not mean that there is no memory pressure. Available Bytes in combination with Pages Input/sec and Pages Output/sec can be a good indicator of the amount of pressure. The goal, in a perfect world, is to have as little hard paging activity as possible with available memory greater than 5 MB. This is not to say that paging is bad; on the contrary, paging is a very effective way to manage a limited resource. Again, we are looking for trends that we can use to establish relationships.

After evaluating physical memory, you should be able to answer the following questions:
• How much physical memory do I have?
• What is the commit limit?
• Of that physical memory, how much has the operating system committed?
• Is the operating system overcommitting physical memory?
• What was the peak commit charge?
• How much available physical memory is there?
• What is the trend for committed and available memory?

Review System Cache and Pool Contribution
After you understand the individual process memory usage, you need to evaluate the System Cache and pool usage. These can, and often do, represent a significant portion of physical memory. Be aware that the System Cache can grow significantly on a file server; this is usually normal. One thing to consider is that the file system cache tends to be the last thing trimmed when memory becomes low. If you see abrupt decreases in System Cache Resident Bytes when Available Bytes is below 5 MB, you can be assured that the system is experiencing excessive memory pressure. Paged and non-paged pool size is also important to consider. An ever-increasing pool should be an indicator for further research. Non-paged pool growth is usually a driver issue, while paged pool growth could be driver-related or process-related. If paged pool is steadily growing, you should investigate each process to see if there is a specific process relationship; if not, you will have to use tools such as poolmon to investigate further.

Review Process Memory Usage
After you understand the physical memory limitations and the cache and pool contribution, you need to determine which components or processes are creating the pressure on memory, if any. Be careful if you opt to chart the _Total Private Bytes rollup for all processes: this value can be misleading in that it includes shared pages and can therefore exceed the actual amount of memory being used by the processes.
The _Total rollup can also mask processes that are leaking memory, because other processes may be freeing memory, creating a balance between leaked and freed memory. Identify processes that expand their working set over time for further analysis. Also review handles and threads, because both use resources and can potentially be mismanaged.

After evaluating the process resource usage, you should be able to answer the following:
• Are any of the processes increasing their private bytes over time?
• Are any processes growing their working set over time?
• Are any processes increasing the number of threads or handles over time?
• Are any processes increasing their use of pool over time?
• Is there a direct relationship between the above-named resources and total committed memory or available memory?
• If there is a relationship, is this normal behavior for the process in question? For example, SQL Server does not commit 'min server memory' on startup; these pages are faulted into the working set as needed. This is not necessarily an indication of a memory leak.
• If there is clearly a leak in the overview and it is not identifiable in the process counters, it is most likely in the pool.
• If the leak in pool is not associated with SQL Server handles, then more often than not it is not a SQL Server issue. There is, however, the possibility that the leak could be associated with third-party XPROCs, sp_OA* calls, or OLE DB providers.

Review Paging Activity and Its Impact on CPU and I/O
As stated earlier, paging is not in and of itself a bad thing. When starting a process, the system faults in the pages of the executable as they are needed, which is preferable to loading the entire image at startup. The same can be said for memory-mapped files and the file system cache; all of these features leverage the ability of the system to fault in pages as needed. The greatest impact of paging on a process is when the process must wait for an in-page fault, or when page file activity represents a significant portion of the disk activity on the disk the application is actively using.

After evaluating page fault activity, you should be able to answer the following questions:
• What is the relationship between Page Faults/sec and Pages Input/sec + Pages Output/sec?
• What is the relationship, if any, between hard page faults and available memory?
• Does paging activity represent a significant portion of processor or I/O resource usage?

Don't Prematurely Jump to Any Conclusions
Analyzing memory pressure takes time and patience. An individual counter in and of itself means little. It is only when you start to explore relationships between cause and effect that you can begin to understand the impact of a particular counter. The key thoughts to remember are:
• With the exception of a swap (when the entire process's working set has been swapped out/in), hard page faults to resolve reads are the most expensive in terms of their effect on a process's perceived performance.
• In general, page writes associated with page faults do not directly affect a process's perceived performance, unless that process is waiting for a free page to be made available. Page file activity can become a problem if it competes for a significant percentage of the disk throughput in a heavily I/O-oriented environment. That assumes, of course, that the page file resides on the same disk the application is using.
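Several of the checklist questions above (total physical memory, commit limit, available memory) can also be answered programmatically. The following is a minimal sketch of ours, not part of the original course notes, using the documented GlobalMemoryStatusEx API (Windows 2000 and later). %I64u is the MSVC-specific format specifier for unsigned 64-bit values, and ullTotalPageFile is, strictly speaking, the current commit limit rather than physical memory plus page file.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);          // must be set before the call
    if (!GlobalMemoryStatusEx(&ms))
        return 1;

    printf("Total physical memory:     %I64u MB\n", ms.ullTotalPhys / (1024 * 1024));
    printf("Available physical memory: %I64u MB\n", ms.ullAvailPhys / (1024 * 1024));
    printf("Commit limit:              %I64u MB\n", ms.ullTotalPageFile / (1024 * 1024));
    printf("Available commit:          %I64u MB\n", ms.ullAvailPageFile / (1024 * 1024));
    printf("Memory load:               %lu%%\n", ms.dwMemoryLoad);
    return 0;
}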
Lab 3.1: Analyzing System Memory Using System Monitor

Exercise 1: Troubleshooting the Cardinal1.log File
Students will evaluate an existing System Monitor log and determine whether there is a problem and what the problem is. Students should be able to isolate the issue as a memory problem, locate the offending process, and determine whether or not this is a pool issue.
Exercise 2: LeakyApp Behavior
Students will start LeakyApp and monitor memory, page file, and cache counters to better understand the dynamics of these counters.
Exercise 3: Process Swap Due to Minimizing of the Cmd Window
Students will start SQL Server from the command line while viewing SQL Server process performance counters. Students will then minimize the window and note the effect on the working set.

Overview
What You Will Learn
After completing this lab, you will be able to:
• Use some of the basic functions within System Monitor.
• Troubleshoot one or more common performance scenarios.

Before You Begin
Prerequisites
To complete this lab, you need the following:
• Windows 2000
• SQL Server 2000
• The lab files provided
• LeakyApp.exe (Resource Kit)
Estimated time to complete this lab: 45 minutes

Exercise 1: Troubleshooting the Cardinal1.log File
In this exercise, you will analyze a log file from an actual system that was having performance problems. Like an actual support engineer, you will not have much information from which to draw conclusions. The customer has sent you this log file, and it is up to you to find the cause of the problem. However, unlike the real world, you have an instructor available to give you hints should you become stuck.

Goal
Review the Cardinal1.log file (this file is from Windows NT 4.0 Performance Monitor, which Windows 2000 can read). Chart the log file and begin to investigate the counters to determine what is causing the performance problems. Your goal should be to isolate the problem to a major area, such as pool or virtual address space, and then to a specific process or thread. This lab requires access to the log file Cardinal1.log located in C:\LABS\M3\LAB1\EX1.

To analyze the log file:
1. Using the Performance MMC, select the System Monitor snap-in, and click the View Log File Data button (the icon looks like a disk).
2. Under Files of type, choose PERFMON Log Files (*.log).
3. Navigate to the folder containing the Cardinal1.log file and open it.
4. Begin examining counters to find what might be causing the performance problems.

When examining some of these counters, you may notice that some of them go off the top of the chart, and it may be necessary to adjust their scale. This can be done by right-clicking the rightmost pane and selecting Properties: select the Data tab, select the counter that you wish to modify, and under the Scale option change the scale value so the counter data is visible on the chart. You may need to experiment with different scale values before finding the ideal value. It may also sometimes be beneficial to adjust the vertical scale for the entire chart: select the Graph tab on the Properties page and, in the Vertical scale area, adjust the Maximum and Minimum values to best fit the data on the chart.

Lab 3.1, Exercise 1: Results

Exercise 2: LeakyApp Behavior
In this lab, you will have an opportunity to work with a partner to monitor a live system that is suffering from a simulated memory leak.

Goal
During this lab, your goal is to observe the system behavior when memory starts to become a limited resource.
Specifically, you will want to monitor committed memory, available memory, the system working set (including the file system cache), and each process's working set. At the end of the lab, you should be able to answer the listed questions.

To monitor a live system with a memory leak:
1. Choose one of the two systems as a victim on which to run the LeakyApp.exe program. It is recommended that you boot using the /MAXMEM=128 option so that this lab goes a little faster. You and your partner should decide which server will play the role of the problematic server and which server is to be used for monitoring purposes.
2. On the problematic server, start the LeakyApp program.
3. On the monitoring system, create a counter log that captures all the counters needed to troubleshoot a memory problem. This should include PhysicalDisk counters if you think paging is a problem. Because it is likely that you will only need to capture less than five minutes of activity, the suggested capture interval is five seconds.
4. After the counter log has been started, switch back to the LeakyApp program.
5. Click Start Leaking. The button changes to Stop Leaking, which indicates that the system is now leaking memory.
6. After LeakyApp shows that the page file is 50 percent full, click Stop Leaking. Note that the process has not given back its memory yet. After approximately one minute, exit LeakyApp.

Lab 3.1, Exercise 2: Questions
After analyzing the counter logs, you should be able to answer the following:
1. Under which system memory counter does the leak show up clearly? Memory: Committed Bytes.
2. Which process counter looked very similar to the overall system counter that showed the leak? Private Bytes.
3. Is the leak in paged pool, non-paged pool, or elsewhere? Elsewhere.
4. At what point did Windows 2000 start to aggressively trim the working sets of all user processes? Below 5 MB free.
5. Was the System Working Set trimmed before or after the working sets of other processes? After.
6. Which counter showed this? Memory: Cache Bytes.
7. At what point was the file system cache trimmed? After the first pass through all other working sets.
8. What was the effect on the processes' working sets when the application stopped leaking? None.
9. What was the effect on the working sets when the application exited? Nothing initially, but all grew fairly quickly based on use.
10. When the server was running low on memory, was Windows spending more time paging to disk or in-paging? Paging to disk initially; however, as other applications began to run, in-paging increased.

Exercise 3: Minimizing a Command Window
In this exercise, you will have an opportunity to observe the behavior of Windows 2000 when a command window is minimized.

Goal
During this lab, your goal is to observe the behavior of Windows 2000 when a command window is minimized. Specifically, you will want to monitor the private bytes, virtual bytes, and working set of SQL Server when the command window is minimized. At the end of the lab, you should be able to answer the listed questions.

To monitor a command window's working set as the window is minimized:
1. Using System Monitor, create a counter log that captures all the counters needed to troubleshoot a memory problem. Because it is likely that you will only need to capture less than five minutes of activity, the suggested capture interval is five seconds.
2. After the counters have been started, open a Command Prompt window on the target system.
3. In the command window, start SQL Server from the command line. Example: sqlservr.exe -c -sINSTANCE1
4. After SQL Server has successfully started, minimize the Command Prompt window.
5. Wait approximately two minutes, and then restore the window.
6. Wait approximately two minutes, and then stop the counter log.

Lab 3.1, Exercise 3: Questions
After analyzing the counter logs, you should be able to answer the following questions:
1. What was the effect on SQL Server's private bytes, virtual bytes, and working set when the window was minimized? Private Bytes and Virtual Bytes remained the same, while the Working Set went to 0.
2. What was the effect on SQL Server's private bytes, virtual bytes, and working set when the window was restored? None; the Working Set did not grow until SQL Server accessed the pages and faulted them back in on an as-needed basis.

SQL Server Memory Overview
Now that you have a better understanding of how Windows 2000 manages memory resources, you can take a closer look at how SQL Server 2000 manages its memory. During the course of the lecture and labs, you will have the opportunity to monitor SQL Server's use of memory under varying conditions, using both System Monitor counters and SQL Server tools.

SQL Server Memory Management Goals
Because SQL Server has in-depth knowledge about the relationships between data and the pages they reside on, it is in a better position to judge when and what pages should be brought into memory, how many pages should be brought in at a time, and how long they should stay resident. SQL Server's primary goals for management of its memory are the following:
• Dynamically adjust for varying amounts of available memory.
• Respond to outside memory pressure from other applications.
• Adjust memory dynamically for internal components.

Items Covered
• SQL Server memory definitions
• SQL Server memory layout
• SQL Server memory counters
• Memory configuration options
• Buffer pool performance and counters
• Set-aside memory and counters
• General troubleshooting process
• Memory myths and tips

SQL Server Memory Definitions
Pool: A group of resources, objects, or logical components that can service a resource allocation request.
Cache: The management of a pool or resource, the primary goal of which is to increase performance.
Bpool: The Bpool (buffer pool) is a single static class instance. The Bpool is made up of 8-KB buffers and can be used to handle data pages or external memory requests. There are three basic types or categories of committed memory in the Bpool:
• Hashed data pages
• Committed buffers on the free list
• Buffers known by their owners (refer to the definition of Stolen)
Consumer: A consumer is a subsystem that uses the Bpool. A consumer can also be a provider to other consumers. There are five consumers and two advanced consumers, responsible for the different categories of memory. The following list shows the consumers and a partial list of their categories:
• Connection: responsible for PSS and ODS memory allocations
• General: resource structures, parse headers, lock manager objects
• Utilities: recovery, log manager
• Optimizer: query optimization
• Query Plan: query plan storage
Advanced Consumer: Along with the five consumers, there are two advanced consumers:
• Ccache: the procedure cache. Accepts plans from the Optimizer and Query Plan consumers.
It is responsible for managing that memory and determines when to release the memory back to the Bpool.
• Log Cache: managed by the LogMgr, which uses the Utility consumer to coordinate memory requests with the Bpool.
Reservation: Requesting the future use of a resource. A reservation is a reasonable guarantee that the resource will be available in the future.
Committed: Producing the physical resource.
Allocation: The act of providing the resource to a consumer.
Stolen: The act of getting a buffer from the Bpool is referred to as stealing a buffer. If the buffer is stolen and hashed for a data page, it is referred to as, and counted as, a hashed buffer, not a stolen buffer. Stolen buffers, on the other hand, are buffers used for things such as procedure cache and SRV_PROC structures.
Target: Target memory is the amount of memory SQL Server would like to maintain as committed memory. Target memory is based on the min and max server configuration values and the currently available memory as reported by the operating system. The actual target calculation is operating-system specific.
Memory to Leave (set-aside): The virtual address space set aside to ensure there is sufficient address space for thread stacks, XPROCs, COM objects, etc.
Hashed Page: A page in the pool that represents a database page.

SQL Server Memory Layout
Virtual Address Space
When SQL Server is started, it evaluates the minimum of the physical RAM and the virtual address space supported by the OS. There are many possible combinations of OS versions and memory configurations; for example, you could be running Microsoft Windows 2000 Advanced Server with 2 GB or possibly 4 GB of memory. To avoid page file use, the appropriate memory level is evaluated for each configuration.

Important: Utilities can inject a DLL into the process address space by using HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs. When the USER32.dll library is mapped into the process space, so, too, are the DLLs listed in this Registry key. To determine which DLLs are running in SQL Server's address space, you can use tlist.exe. You can also use a tool such as Depends from Microsoft or HandleEx from http://www.sysinternals.com.

Memory to Leave
As stated earlier, there are many possible configurations of physical memory and address space, and it is possible for physical memory to be greater than virtual address space. To ensure that some virtual address space is always available for things such as thread stacks and external needs such as XPROCs, SQL Server reserves a small portion of virtual address space prior to determining the size of the buffer pool. This address space is referred to as Memory To Leave. Its size is based on the number of anticipated thread stacks and a default value for external needs referred to as cmbAddressSave. After reserving the buffer pool space, the Memory To Leave reservation is released.

Buffer Pool Space
During startup, SQL Server must determine the maximum size of the buffer pool so that the BUF, BUFHASH, and COMMIT BITMAP structures used to manage the Bpool can be created. It is important to understand that SQL Server does not take 'max server memory' or existing memory pressure into consideration. The reserved address space of the buffer pool remains static for the life of the SQL Server process. However, the committed space varies as necessary to provide dynamic scaling. Remember, only the committed memory affects the overall memory usage on the machine.
This ensures that the max memory configuration setting can be changed dynamically with minimal changes needed in the Bpool. The reserved space does not need to be adjusted and is maximized for the current machine configuration. Only the committed buffers need to be limited to maintain a specified max server memory (MB) setting.

SQL Server Startup Pseudo Code
The following pseudo code represents the process SQL Server goes through on startup.

Warning: This example does not represent a completely accurate portrayal of the steps SQL Server takes when initializing the buffer pool. Several details have been left out or glossed over. The intent of this example is to help you understand the general process, not the specific details.

• Determine the size of cmbAddressSave (-g)
• Determine total physical memory
• Determine available physical memory
• Determine total virtual memory
• Calculate MemToLeave: max worker threads * (stack size = 512 KB) + (cmbAddressSave = 256 MB)
• Reserve MemToLeave and set PAGE_NOACCESS
• Check for AWE, test whether it makes sense to use it, and log the results:
  - Min(available memory, max server memory) > virtual memory
  - Supports read scatter
  - SQL Server not started with -f
  - AWE enabled via sp_configure
  - Enterprise Edition
  - Lock Pages In Memory user right enabled
• Calculate the virtual address limit: VA Limit = Min(physical memory, virtual memory - MemToLeave)
• Calculate the number of physical and virtual buffers that can be supported:
  AWE present:
    Physical Buffers = RAM / (PAGESIZE + physical overhead)
    Virtual Buffers = VA Limit / (PAGESIZE + virtual overhead)
  AWE not present:
    Physical Buffers = Virtual Buffers = VA Limit / (PAGESIZE + physical overhead + virtual overhead)
• Make sure we have the minimum number of buffers: Physical Buffers = Max(Physical Buffers, MIN_BUFFERS)
• Allocate and commit the buffer management structures
• Reserve the address space required to support the Bpool buffers
• Release the MemToLeave

SQL Server Startup Pseudo Code Example
The following is an example based on the pseudo code above, for a machine with 384 MB of physical memory, not using AWE or /3GB.

Note: cmbAddressSave was changed between SQL Server 7.0 and SQL Server 2000. For SQL Server 7.0, cmbAddressSave was 128.

Warning: This example does not represent a completely accurate portrayal of the steps SQL Server takes when initializing the buffer pool. Several details have been left out or glossed over. The intent of this example is to help you understand the general process, not the specific details.
• Determine the size of cmbAddressSave (no -g, so 256 MB)
• Determine total physical memory (384 MB)
• Determine available physical memory (384 MB)
• Determine total virtual memory (2 GB)
• Calculate MemToLeave: max worker threads * (stack size = 512 KB) + (cmbAddressSave = 256 MB) = 255 * 0.5 MB + 256 MB = 384 MB
• Reserve MemToLeave and set PAGE_NOACCESS
• Check for AWE and log the results (AWE not enabled)
• Calculate the virtual address limit: VA Limit = Min(physical memory, virtual memory - MemToLeave); 384 MB = Min(384 MB, 2 GB - 384 MB)
• Calculate the number of physical and virtual buffers that can be supported (AWE not present): approximately 48664 = 384 MB / (8 KB + overhead)
• Make sure we have the minimum number of buffers: Physical Buffers = Max(Physical Buffers, MIN_BUFFERS); 48664 = Max(48664, 1024)
• Allocate and commit the buffer management structures
• Reserve the address space required to support the Bpool buffers
• Release the MemToLeave

Tip: Trace flag 1604 can be used to view memory allocations on startup. The cmbAddressSave value can be adjusted using the -g XXX startup parameter.

SQL Server Memory Counters
The two primary tools for monitoring and analyzing SQL Server memory usage are System Monitor and DBCC MEMORYSTATUS. For detailed information on DBCC MEMORYSTATUS, refer to Q271624 Interpreting the Output of the DBCC MEMORYSTATUS Command.

Important: This section describes the SQL Server 2000 counters. They are not the same as the counters for SQL Server 7.0; the SQL Server 7.0 counters are listed in the appendix.

Determining Memory Usage for OS and Bpool
Memory Manager: Total Server Memory (KB) represents all of SQL Server's usage. Buffer Manager: Total Pages represents total Bpool usage. To determine how much of Total Server Memory (KB) represents MemToLeave space, subtract Buffer Manager: Total Pages (multiplied by 8 to convert 8-KB pages to KB). The result can be verified against DBCC MEMORYSTATUS, specifically Dynamic Memory Manager: OS In Use. It should be noted, however, that this value only represents requests that went through the Bpool. Memory reserved outside of the Bpool by components such as COM objects will not show up here, although it will count against SQL Server's private byte count.

Buffer Counts: Target (Buffer Manager: Target Pages)
The size the buffer pool would like to be. If this value is larger than committed, the buffer pool is growing.

Buffer Counts: Committed (Buffer Manager: Total Pages)
The total number of buffers committed in the OS. This is the current size of the buffer pool.

Buffer Counts: Min Free
The number of pages that the buffer pool tries to keep on the free list. If the free list falls below this value, the buffer pool will attempt to populate it by discarding old pages from the data or procedure cache.

Buffer Distribution: Free (Buffer Manager / Buffer Partition: Free Pages)
This value represents the buffers currently not in use. These are available for data or may be requested by other components.
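To make the startup arithmetic above concrete, here is a small C++ sketch of ours (not SQL Server code) that reproduces the example's calculation. The per-buffer overhead constant is an assumed illustrative value chosen so the result lands near the figure quoted in the example; the course text rounds the 383.5 MB MemToLeave value to 384 MB.

#include <algorithm>
#include <cstdio>

int main()
{
    const double MB = 1024.0 * 1024.0;
    const double physicalMemory   = 384 * MB;   // machine in the example
    const double virtualMemory    = 2048 * MB;  // 2 GB user-mode address space
    const double stackSize        = 0.5 * MB;   // 512 KB per worker thread
    const int    maxWorkerThreads = 255;
    const double cmbAddressSave   = 256 * MB;   // default, no -g switch

    // MemToLeave = max worker threads * stack size + cmbAddressSave
    const double memToLeave = maxWorkerThreads * stackSize + cmbAddressSave;

    // VA Limit = Min(physical memory, virtual memory - MemToLeave)
    const double vaLimit = std::min(physicalMemory, virtualMemory - memToLeave);

    const double pageSize = 8.0 * 1024.0;  // 8 KB Bpool buffer
    const double overhead = 80.0;          // assumed per-buffer overhead, in bytes
    const double buffers  = vaLimit / (pageSize + overhead);

    std::printf("MemToLeave: %.1f MB\n", memToLeave / MB);
    std::printf("VA limit:   %.1f MB\n", vaLimit / MB);
    std::printf("Buffers:    %.0f (the example quotes approx. 48664)\n", buffers);
    return 0;
}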

2009-11-27

PowerDesigner逆向工程详细教程

1 PowerDesigner Reverse Engineering
Programmers commonly use PowerDesigner for database modeling: first design the physical data model, then generate the SQL statements the database needs, and use them to create the database. Here we go the other way and use PowerDesigner for reverse engineering, that is, generating a model from an existing database. The walkthrough is based on a MySQL 5.0 database and PowerDesigner 12, and consists of the following main steps.

1> Create an ODBC data source through the Windows data source administrator.
• First, install the ODBC driver. Patch files for MySQL 3.5.1 and MySQL 5.1.5 are included here; use them for the installation.
• Open the Windows Control Panel.
• Open Administrative Tools.
• Open Data Sources (ODBC).
• Select the type of database you want to work with.
• Enter the database parameters and test the connection.

2> From PowerDesigner, use the ODBC data source to establish a PowerDesigner connection.
• Create a new physical data model.
• Choose Database -> Connect.
• Select the ODBC data source configured above.
• Enter the user name and password for the database.

3> Perform the reverse engineering in PowerDesigner.
• Choose Database -> Reverse Engineer Database. (In version 15.0.0.2613 this menu entry has changed; the original post shows a screenshot.)
• Use the data source configured above.
• Select the objects you are interested in.
• Generate the physical model.
• After a short wait, the data model we need appears (the original post shows the resulting diagram, including a zoomed-in view).
• That completes the reverse engineering.
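Step 1's "test the connection" can also be verified outside of PowerDesigner. Below is a minimal C sketch of ours (not part of the tutorial) that opens the configured DSN through the standard ODBC C API; MyMySQLDSN, user, and password are placeholders for the DSN name and credentials you configured. Link with odbc32.lib.

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <stdio.h>

int main(void)
{
    SQLHENV env = SQL_NULL_HANDLE;
    SQLHDBC dbc = SQL_NULL_HANDLE;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // Placeholder DSN name and credentials from step 1.
    SQLRETURN rc = SQLConnect(dbc,
        (SQLCHAR*)"MyMySQLDSN", SQL_NTS,
        (SQLCHAR*)"user", SQL_NTS,
        (SQLCHAR*)"password", SQL_NTS);

    printf(SQL_SUCCEEDED(rc) ? "DSN connection OK\n" : "DSN connection failed\n");

    if (SQL_SUCCEEDED(rc))
        SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}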

2009-11-24

SQL示例大全.pdf

A carefully collected and organized set of SQL statement examples to help you become familiar with SQL more easily.

1 DECLARE @local_variable
1.1 Using DECLARE
The following example uses a local variable named @find to retrieve contact information for all last names beginning with "Man".

USE AdventureWorks;
GO
DECLARE @find varchar(30);
SET @find = 'Man%';
SELECT LastName, FirstName, Phone
FROM Person.Contact
WHERE LastName LIKE @find;

2 EXECUTE
2.1 Using EXECUTE to pass a single parameter
The uspGetEmployeeManagers stored procedure expects one parameter (@EmployeeID). The following example executes it with Employee ID 6 as the parameter value.

USE AdventureWorks;
GO
EXEC dbo.uspGetEmployeeManagers 6;
GO

2.2 The variable can be named explicitly during execution:

EXEC dbo.uspGetEmployeeManagers @EmployeeID = 6;
GO

2.3 Using multiple parameters
The following example executes the uspGetWhereUsedProductID stored procedure. It passes two parameters: the first is a product ID (819), and the second, @CheckDate, is a datetime value.

USE AdventureWorks;
GO
DECLARE @CheckDate datetime;
SET @CheckDate = GETDATE();
EXEC dbo.uspGetWhereUsedProductID 819, @CheckDate;
GO

3 sp_executesql
Executes a Transact-SQL statement or batch that can be reused many times or that has been built dynamically. The Transact-SQL statement or batch can contain embedded parameters. sp_executesql supports setting parameter values separately from the Transact-SQL string, as the following example shows.

DECLARE @IntVariable int;
DECLARE @SQLString nvarchar(500);
DECLARE @ParmDefinition nvarchar(500);
/* Build the SQL string one time. */
SET @SQLString =
    N'SELECT EmployeeID, NationalIDNumber, Title, ManagerID
      FROM AdventureWorks.HumanResources.Employee
      WHERE ManagerID = @ManagerID';
SET @ParmDefinition = N'@ManagerID tinyint';
/* Execute the string with the first parameter value. */
SET @IntVariable = 197;
EXECUTE sp_executesql @SQLString, @ParmDefinition, @ManagerID = @IntVariable;
/* Execute the same string with the second parameter value. */
SET @IntVariable = 109;
EXECUTE sp_executesql @SQLString, @ParmDefinition, @ManagerID = @IntVariable;

Output parameters can also be used with sp_executesql. The following example retrieves a title from the AdventureWorks.HumanResources.Employee table and returns it in the output parameter @max_title.

DECLARE @IntVariable int;
DECLARE @SQLString nvarchar(500);
DECLARE @ParmDefinition nvarchar(500);
DECLARE @max_title varchar(30);
SET @IntVariable = 197;
SET @SQLString =
    N'SELECT @max_titleOUT = max(Title)
      FROM AdventureWorks.HumanResources.Employee
      WHERE ManagerID = @level';
SET @ParmDefinition = N'@level tinyint, @max_titleOUT varchar(30) OUTPUT';
EXECUTE sp_executesql @SQLString, @ParmDefinition,
    @level = @IntVariable, @max_titleOUT = @max_title OUTPUT;
SELECT @max_title;

3.1 Executing a simple SELECT statement
The following example creates and executes a simple SELECT statement containing an embedded parameter named @level.

EXECUTE sp_executesql
    N'SELECT * FROM AdventureWorks.HumanResources.Employee WHERE ManagerID = @level',
    N'@level tinyint',
    @level = 109;

3.2 Executing a dynamically built string
The following example shows sp_executesql executing a dynamically built string. The stored procedure in the example inserts data into a set of tables that partition sales data for a year. There is one table for each month of the year, in the following format:

CREATE TABLE May1998Sales
(OrderID int PRIMARY KEY,
 CustomerID int NOT NULL,
 OrderDate datetime NULL
    CHECK (DATEPART(yy, OrderDate) = 1998),
 OrderMonth int
    CHECK (OrderMonth = 5),
 DeliveryDate datetime NULL,
 CHECK (DATEPART(mm, OrderDate) = OrderMonth)
)

3.3 Using OUTPUT parameters
The following example uses an OUTPUT parameter to store the result generated by a SELECT statement. Two SELECT statements that use the value of the OUTPUT parameter are then executed.

USE AdventureWorks;
GO
DECLARE @SQLString nvarchar(500);
DECLARE @ParmDefinition nvarchar(500);
DECLARE @SalesOrderNumber nvarchar(25);
DECLARE @IntVariable int;
SET @SQLString =
    N'SELECT @SalesOrderOUT = MAX(SalesOrderNumber)
      FROM Sales.SalesOrderHeader
      WHERE CustomerID = @CustomerID';
SET @ParmDefinition = N'@CustomerID int, @SalesOrderOUT nvarchar(25) OUTPUT';
SET @IntVariable = 22276;
EXECUTE sp_executesql @SQLString, @ParmDefinition,
    @CustomerID = @IntVariable, @SalesOrderOUT = @SalesOrderNumber OUTPUT;
-- This SELECT statement returns the value of the OUTPUT parameter.
SELECT @SalesOrderNumber;
-- This SELECT statement uses the value of the OUTPUT parameter in
-- the WHERE clause.
SELECT OrderDate, TotalDue
FROM Sales.SalesOrderHeader
WHERE SalesOrderNumber = @SalesOrderNumber;

4 CAST and CONVERT
Explicitly converts an expression of one data type to another. CAST and CONVERT provide similar functionality.

DECLARE @myval decimal (5, 2)
SET @myval = 193.57
SELECT CAST(CAST(@myval AS varbinary(20)) AS decimal(10,5))
-- Or, using CONVERT
SELECT CONVERT(decimal(10,5), CONVERT(varbinary(20), @myval))

4.1 Using both CAST and CONVERT
-- Use CAST
USE AdventureWorks;
GO
SELECT SUBSTRING(Name, 1, 30) AS ProductName, ListPrice
FROM Production.Product
WHERE CAST(ListPrice AS int) LIKE '3%';
GO
-- Use CONVERT.
USE AdventureWorks;
GO
SELECT SUBSTRING(Name, 1, 30) AS ProductName, ListPrice
FROM Production.Product
WHERE CONVERT(int, ListPrice) LIKE '3%';
GO

4.2 Using CAST with arithmetic operators
USE AdventureWorks;
GO
SELECT CAST(ROUND(SalesYTD/CommissionPCT, 0) AS int) AS 'Computed'
FROM Sales.SalesPerson
WHERE CommissionPCT != 0;
GO

4.3 Using CAST for concatenation
USE AdventureWorks;
GO
SELECT 'The list price is ' + CAST(ListPrice AS varchar(12)) AS ListPrice
FROM Production.Product
WHERE ListPrice BETWEEN 350.00 AND 400.00;
GO

4.4 Using CAST to produce more readable text
USE AdventureWorks;
GO
SELECT DISTINCT CAST(p.Name AS char(10)) AS Name, s.UnitPrice
FROM Sales.SalesOrderDetail s
JOIN Production.Product p ON s.ProductID = p.ProductID
WHERE Name LIKE 'Long-Sleeve Logo Jersey, M';
GO

4.5 Using CAST and CONVERT with datetime data
SELECT GETDATE() AS UnconvertedDateTime,
       CAST(GETDATE() AS nvarchar(30)) AS UsingCast,
       CONVERT(nvarchar(30), GETDATE(), 126) AS UsingConvertTo_ISO8601;
GO

5 SELECT
5.1 Using DISTINCT with SELECT
The following example uses DISTINCT to avoid retrieving duplicate titles.
USE AdventureWorks;
GO
SELECT DISTINCT Title
FROM HumanResources.Employee
ORDER BY Title;
GO

5.2 Creating a table with SELECT INTO
The first example below creates a temporary table named #Bicycles in tempdb. To use this table, always refer to it with exactly the name shown, including the number sign (#).
USE tempdb;
IF OBJECT_ID (N'#Bicycles', N'U') IS NOT NULL
    DROP TABLE #Bicycles;
GO
USE AdventureWorks;
GO
SET NOCOUNT ON
SELECT * INTO #Bicycles
FROM Production.Product
WHERE ProductNumber LIKE 'BK%'
SET NOCOUNT OFF
SELECT name FROM tempdb..sysobjects WHERE name LIKE '#Bicycles%';
GO

5.3 Using a correlated subquery
The following example shows semantically equivalent queries and illustrates the difference between using the EXISTS keyword and the IN keyword. Both are valid subquery examples that retrieve one instance of each product name whose product model is a long-sleeve logo jersey and whose ProductModelID values match between the Product and ProductModel tables.
USE AdventureWorks;
GO
SELECT DISTINCT Name
FROM Production.Product p
WHERE EXISTS
    (SELECT * FROM Production.ProductModel pm
     WHERE p.ProductModelID = pm.ProductModelID
       AND pm.Name = 'Long-sleeve logo jersey');
GO
-- OR
USE AdventureWorks;
GO
SELECT DISTINCT Name
FROM Production.Product
WHERE ProductModelID IN
    (SELECT ProductModelID FROM Production.ProductModel
     WHERE Name = 'Long-sleeve logo jersey');
GO

5.4 Using GROUP BY
The following example finds the total of each sales order in the database.
USE AdventureWorks;
GO
SELECT SalesOrderID, SUM(LineTotal) AS SubTotal
FROM Sales.SalesOrderDetail sod
GROUP BY SalesOrderID
ORDER BY SalesOrderID;
GO
Because of the GROUP BY clause, only one row with the sales total is returned for each sales order.

5.5 Using GROUP BY with multiple groups
The following example finds the average price and the year-to-date total sales, grouped by product ID and special offer ID.
USE AdventureWorks
SELECT ProductID, SpecialOfferID, AVG(UnitPrice) AS 'Average Price',
       SUM(LineTotal) AS SubTotal
FROM Sales.SalesOrderDetail
GROUP BY ProductID, SpecialOfferID
ORDER BY ProductID
GO

5.6 Using GROUP BY and WHERE
The following example groups the results after retrieving only the rows with list prices greater than $1000.
USE AdventureWorks;
GO
SELECT ProductModelID, AVG(ListPrice) AS 'Average List Price'
FROM Production.Product
WHERE ListPrice > $1000
GROUP BY ProductModelID
ORDER BY ProductModelID;
GO

5.7 Using GROUP BY with an expression
The following example groups by an expression. You can group by an expression if the expression does not include aggregate functions.
USE AdventureWorks;
GO
SELECT AVG(OrderQty) AS 'Average Quantity',
       NonDiscountSales = (OrderQty * UnitPrice)
FROM Sales.SalesOrderDetail sod
GROUP BY (OrderQty *
UnitPrice)
ORDER BY (OrderQty * UnitPrice) DESC;
GO

6 Date and Time
Syntax for CAST: CAST ( expression AS data_type [ (length) ] )
Syntax for CONVERT: CONVERT ( data_type [ (length) ], expression [ , style ] )

Style values for converting datetime to character data:

Without century (yy) (1) | With century (yyyy) | Standard                       | Input/output (3)
-                        | 0 or 100 (1,2)      | Default                        | mon dd yyyy hh:miAM (or PM)
1                        | 101                 | U.S.                           | mm/dd/yyyy
2                        | 102                 | ANSI                           | yy.mm.dd
3                        | 103                 | British/French                 | dd/mm/yy
4                        | 104                 | German                         | dd.mm.yy
5                        | 105                 | Italian                        | dd-mm-yy
6                        | 106 (1)             | -                              | dd mon yy
7                        | 107 (1)             | -                              | mon dd, yy
8                        | 108                 | -                              | hh:mi:ss
-                        | 9 or 109 (1,2)      | Default + milliseconds         | mon dd yyyy hh:mi:ss:mmmAM (or PM)
10                       | 110                 | USA                            | mm-dd-yy
11                       | 111                 | Japan                          | yy/mm/dd
12                       | 112                 | ISO                            | yymmdd
-                        | 13 or 113 (1,2)     | Europe default + milliseconds  | dd mon yyyy hh:mi:ss:mmm (24h)
14                       | 114                 | -                              | hh:mi:ss:mmm (24h)
-                        | 20 or 120 (2)       | ODBC canonical                 | yyyy-mm-dd hh:mi:ss (24h)
-                        | 21 or 121 (2)       | ODBC canonical (milliseconds)  | yyyy-mm-dd hh:mi:ss.mmm (24h)
-                        | 126 (4)             | ISO8601                        | yyyy-mm-ddThh:mi:ss.mmm (no spaces)
-                        | 127 (6,7)           | ISO8601 with time zone Z       | yyyy-mm-ddThh:mi:ss.mmmZ (no spaces)
-                        | 130 (1,2)           | Hijri (5)                      | dd mon yyyy hh:mi:ss:mmmAM
-                        | 131 (2)             | Hijri (5)                      | dd/mm/yy hh:mi:ss:mmmAM

Select CONVERT(varchar(100), GETDATE(), 0): 05 16 2006 10:57AM
Select CONVERT(varchar(100), GETDATE(), 1): 05/16/06
Select CONVERT(varchar(100), GETDATE(), 2): 06.05.16
Select CONVERT(varchar(100), GETDATE(), 3): 16/05/06
Select CONVERT(varchar(100), GETDATE(), 4): 16.05.06
Select CONVERT(varchar(100), GETDATE(), 5): 16-05-06
Select CONVERT(varchar(100), GETDATE(), 6): 16 05 06
Select CONVERT(varchar(100), GETDATE(), 7): 05 16, 06
Select CONVERT(varchar(100), GETDATE(), 8): 10:57:46
Select CONVERT(varchar(100), GETDATE(), 9): 05 16 2006 10:57:46:827AM
Select CONVERT(varchar(100), GETDATE(), 10): 05-16-06
Select CONVERT(varchar(100), GETDATE(), 11): 06/05/16
Select CONVERT(varchar(100), GETDATE(), 12): 060516
Select CONVERT(varchar(100), GETDATE(), 13): 16 05 2006 10:57:46:937
Select CONVERT(varchar(100), GETDATE(), 14): 10:57:46:967
Select CONVERT(varchar(100), GETDATE(), 20): 2006-05-16 10:57:47
Select CONVERT(varchar(100), GETDATE(), 21): 2006-05-16 10:57:47.157
Select CONVERT(varchar(100), GETDATE(), 22): 05/16/06 10:57:47 AM
Select CONVERT(varchar(100), GETDATE(), 23): 2006-05-16
Select CONVERT(varchar(100), GETDATE(), 24): 10:57:47
Select CONVERT(varchar(100), GETDATE(), 25): 2006-05-16 10:57:47.250
Select CONVERT(varchar(100), GETDATE(), 100): 05 16 2006 10:57AM
Select CONVERT(varchar(100), GETDATE(), 101): 05/16/2006
Select CONVERT(varchar(100), GETDATE(), 102): 2006.05.16
Select CONVERT(varchar(100), GETDATE(), 103): 16/05/2006
Select CONVERT(varchar(100), GETDATE(), 104): 16.05.2006
Select CONVERT(varchar(100), GETDATE(), 105): 16-05-2006
Select CONVERT(varchar(100), GETDATE(), 106): 16 05 2006
Select CONVERT(varchar(100), GETDATE(), 107): 05 16, 2006
Select CONVERT(varchar(100), GETDATE(), 108): 10:57:49
Select CONVERT(varchar(100), GETDATE(), 109): 05 16 2006 10:57:49:437AM
Select CONVERT(varchar(100), GETDATE(), 110): 05-16-2006
Select CONVERT(varchar(100), GETDATE(), 111): 2006/05/16
Select CONVERT(varchar(100), GETDATE(), 112): 20060516
Select CONVERT(varchar(100), GETDATE(), 113): 16 05 2006 10:57:49:513
Select CONVERT(varchar(100), GETDATE(), 114): 10:57:49:547
Select CONVERT(varchar(100), GETDATE(), 120): 2006-05-16 10:57:49
Select CONVERT(varchar(100), GETDATE(), 121): 2006-05-16 10:57:49.700
Select CONVERT(varchar(100), GETDATE(), 126): 2006-05-16T10:57:49.827
Select CONVERT(varchar(100), GETDATE(), 130): 18 ???? ?????? 1427 10:57:49:907AM
Select CONVERT(varchar(100), GETDATE(), 131): 18/04/1427 10:57:49:920AM

2009-11-20

Graphics Programming with GDI+

Graphics Programming with GDI+
GDI+ is the next-generation graphics interface: to build .NET Framework graphics applications, you must use GDI+. This book teaches .NET developers how to write Windows and web graphics applications, comprehensively covering the fundamentals of GDI+ and Windows graphics programming as well as every major area of GDI+ programming. It is suitable for beginning and intermediate programmers developing GDI+ graphics applications; its many reusable sample code listings written in C# help readers absorb the concepts more quickly, and the book can also serve as a supporting textbook for related college courses.
Through extensive, detailed examples, the book gives experienced programmers a deeper understanding of the entire GDI+ API defined in the .NET Framework class library. It begins with the basics of GDI+ and Windows graphics programming; its core is guidance on practical problems, including how to use Windows Forms and how to optimize GDI+ performance. It shows how to develop real-world tools such as GDI+ Painter, GDI+ Editor, ImageViewer, and ImageAnimator, and the complete C# and Visual Basic .NET source code can be downloaded from the web, where the figures in the book can also be viewed in color.
Topics covered include:
• Comparing GDI+ with GDI
• How GDI+ is defined and used in the .NET Framework
• Drawing and filling graphics objects
• Viewing and manipulating images
• Transforming graphics objects, images, colors, and more
• Printing in .NET
• Developing GDI+ web applications
• Optimizing drawing quality and performance
• Interactive color blending and transparent colors
• GDI interoperability
• Answers to common GDI+ questions

2009-09-17

界面技术-对话框的内容滚动显示.doc

Selecting the "Horizontal Scroll" and "Verticla Scroll" styles among the properties of your dialog box in the resource editor, you can add scroll bars to the dialog box. Remember also to select the 'resizing' border style. However for adding functionality to the scroll bars, you need to override the WM_VSCROLL and WM_HSCROLL message handlers. Also,override the WM_SIZE handler to set the scroll bar range if the size is reduced than the original. So you get the original size of the dialog in your OninitDialog(). The code would look something like this. Modify to your needs. 1. To OnInitDialog(),add the following line. GetWindowRect(m_rect); m_nScrollPos = 0; to get the original window size. Make m_rect a member variable of your dialog. Add another variable m_nScrollPos and initialize its value to zero. It stores the current vertical scroll position. 2. Here is the WM_SIZE handler for setting the scroll bar range.Set range 0 if size is increased more than original. void CCharlesDlg::OnSize(UINT nType, int cx, int cy) { CDialog::OnSize(nType, cx, cy); // TODO: Add your message handler code here m_nCurHeight = cy; int nScrollMax; if (cy < m_rect.Height()) { nScrollMax = m_rect.Height() - cy; } else nScrollMax = 0; SCROLLINFO si; si.cbSize = sizeof(SCROLLINFO); si.fMask = SIF_ALL;// SIF_ALL = SIF_PAGE | SIF_RANGE | SIF_POS; si.nMin = 0; si.nMax = nScrollMax; si.nPage = si.nMax/10; si.nPos = 0; SetScrollInfo(SB_VERT, &si, TRUE); } You need m_nCurHeight to store the current height of the dialog and use it to handle the scrolling in OnVScroll. m_ncurHeight is also a member variable of the dialog. 3. Here is the handler for WM_VSCROLL. void CCharlesDlg::OnVScroll(UINT nSBCode,UINT nPos,CScrollBar* pScrollBar) { //TODO:Add your message handler code here and/or call default int nDelta; int nMaxPos = m_rect.Height() - m_nCurHeight; switch(nSBCode) { case SB_LINEDOWN: if (m_nScrollPos >= nMaxPos) return; nDelta = min(nMaxPos/100,nMaxPos-m_nScrollPos); break; case SB_LINEUP: if (m_nScrollPos <= 0) return; nDelta = -min(nMaxPos/100,m_nScrollPos); break; case SB_PAGEDOWN: if (m_nScrollPos >= nMaxPos) return; nDelta = min(nMaxPos/10,nMaxPos-m_nScrollPos); break; case SB_THUMBPOSITION: nDelta = (int)nPos - m_nScrollPos; break; case SB_PAGEUP: if (m_nScrollPos <= 0) return; nDelta = -min(nMaxPos/10,m_nScrollPos); break; default: return; } m_nScrollPos += nDelta; SetScrollPos(SB_VERT,m_nScrollPos,TRUE); ScrollWindow(0,-nDelta); CDialog::OnVScroll(nSBCode, nPos, pScrollBar); } The above code handles the vertical scrolling. For horizontal scrolling add the WM_HSCROLL similarly and add the necessary code to OnSize and OnInitDialog. Information provided in this document and any software that may accompany this document is provided "as is" without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose. The user assumes the entire risk as to the accuracy and the use of this information.

2009-08-27

Visual Assistant 10 设置图文教程

1 Editing shortcuts
2 Underline settings
3 Refactoring
4 Showing the context list
5 Exporting settings
6 Disabling the tip of the day
7 Highlighting the current line

2009-07-11

Windows窗口样式-PDF

This document comprehensively summarizes the window styles of the basic Windows controls, with annotations for quick reference during development.

1 Window styles
WS_POPUP: pop-up window (cannot be combined with WS_CHILDWINDOW)
WS_CHILDWINDOW: child window (cannot be combined with WS_POPUP)
WS_MINIMIZE: the window is initially minimized
WS_MINIMIZEBOX: the window has a minimize button (WS_SYSMENU must also be specified)
WS_VISIBLE: the window is initially visible
WS_DISABLED: the window is initially disabled
WS_CLIPSIBLINGS: clips child windows relative to each other
WS_CLIPCHILDREN: excludes the area occupied by child windows when drawing within the parent window
WS_MAXIMIZE: the window is initially maximized
WS_MAXIMIZEBOX: the window has a maximize button (WS_SYSMENU must also be specified)
WS_CAPTION: the window has a title bar and border (includes the WS_BORDER style)
WS_BORDER: the window has a thin border
WS_DLGFRAME: the window has a dialog-box-style border, without a title bar
WS_VSCROLL: the window has a vertical scroll bar
WS_HSCROLL: the window has a horizontal scroll bar
WS_SYSMENU: the window has a window menu on its title bar (WS_CAPTION must be specified)
WS_THICKFRAME: the window has a sizing border (same as WS_SIZEBOX)
WS_TILED: same as WS_OVERLAPPED
WS_TILEDWINDOW: same as WS_OVERLAPPEDWINDOW
WS_GROUP: group style; the first control of each group also has the WS_TABSTOP style
WS_TABSTOP: the control can receive keyboard focus via the TAB key
WS_OVERLAPPED: creates an overlapped window with a title bar and border
WS_OVERLAPPEDWINDOW: combines the WS_OVERLAPPED, WS_CAPTION, WS_SYSMENU, WS_THICKFRAME, WS_MINIMIZEBOX, and WS_MAXIMIZEBOX styles

2 Extended window styles reference list
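To show how the basic styles in section 1 combine in practice, here is a minimal Win32 sketch (ours, not from the PDF) that creates an overlapped window carrying several of the flags listed above; MyWndClass and the window title are illustrative names.

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow)
{
    WNDCLASS wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = TEXT("MyWndClass");
    RegisterClass(&wc);

    // WS_OVERLAPPEDWINDOW already bundles WS_OVERLAPPED, WS_CAPTION,
    // WS_SYSMENU, WS_THICKFRAME, WS_MINIMIZEBOX, and WS_MAXIMIZEBOX;
    // scroll bars and child clipping are added on top.
    HWND hwnd = CreateWindowEx(
        0, TEXT("MyWndClass"), TEXT("Style demo"),
        WS_OVERLAPPEDWINDOW | WS_VSCROLL | WS_HSCROLL | WS_CLIPCHILDREN,
        CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
        NULL, NULL, hInst, NULL);
    ShowWindow(hwnd, nCmdShow);

    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}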

2009-07-02

Visio2000技术大全-PDF

This part includes the following content:
• Chapter 1: Understanding Visio 2000
• Chapter 2: Touring the Visio 2000 working environment
• Chapter 3: Getting familiar with Visio projects
Part I is a brief introduction to Visio 2000. It describes the software's features and newest capabilities, presents some of the business drawing tasks professionals use Visio for, briefly reviews the company's history and product line, and tours the features of the Visio 2000 startup window.

Chapter 1: Understanding Visio 2000
This chapter covers:
• Getting to know Visio.
• Finding out which Visio program suits you.
• Understanding the features of Visio 2000.
Visio is a drawing program particularly well suited to business users. Because it makes creating diagrams and visual presentations so easy, however, people of all kinds use it in their work. After all, visualization is needed in many situations; whether you are improving your home or modeling a database structure at the office, the Visio drawing software makes the task feel both precise and easy.

2009-07-02

JSP高级编程05-PDF

JSP是一种如日中天的新型Internet/Intranet开发语言,可以在多种操作系统平台和多种Web服务器下使用。本书从最基础的JSP开发开始,循序渐进地介绍了JSP开发技术,并涵盖了许多高级主题,如需要在企业级Web应用中使用的特性:Enterprise JavaBeans、JDBC 2.0、数据库连接池和自定义标签库。本书既适合初学者阅读,也适合具有一定JSP基础的开发人员深入研究使用。

前言

JSP是SUN公司推出的一种新型的Internet/Intranet开发语言,和前一代Internet/Intranet开发语言(ASP、PHP)相比,JSP在以下几个方面有了重大的突破:
1) 通过JSP的扩展标签库和JavaBeans功能,网站逻辑和网站界面可以完美地分离。
2) 使用Enterprise JavaBeans,可以轻松地在JSP开发的Web应用中实现事务、安全、会话等企业级应用所需要的功能。
3) JDBC 2.0提供了与不同数据库产品无关的数据库连接方式,更重要的是,数据库连接池提供了一种比普通的数据库连接方式效率高得多的连接方式。

JSP的语法基本上和Java是相同的,有Java基础的读者可以很快学会如何使用JSP;而没有Java语言基础的读者,只要循序渐进地阅读本书,一样可以成为JSP编程的高手。本书主要分为两个部分:第一部分为JSP基础部分,通过这一部分的学习,读者可以掌握JSP的基本使用方法,学会如何使用JSP来开发一般的中、小型Web应用,这一部分使用常见的Apache Group的Tomcat作为JSP引擎的例子;第二部分为JSP高级应用部分,主要讲述如何使用JSP进行大型Web应用的开发,为了方便读者学习,本书还专门讲述了SUN公司的J2SDKEE和BEA公司的Weblogic应用服务器的基本使用方法。

JSP可以在各种操作系统和各种Web服务器下使用,其代码基本上不需要任何改动就可以使用。本书为了适应大多数读者的情况,使用了Windows操作系统作为例子,具体的试验平台如下:
Windows 2000 Advanced Server
Apache 1.3.14
Internet Information Server 5.0
Tomcat 3.1
J2SDKEE 1.2
BEA Weblogic 5.1
除了上述平台,书中的代码还在如下平台进行了测试:
Redhat Linux 6.1
Apache 1.3.12
Tomcat 3.1
BEA Weblogic 4.51
数据库系统主要使用了Microsoft SQL Server 7.0,部分代码使用了MySQL。

作者 2000.11

2009-07-01

JSP高级编程04-PDF


2009-07-01

JSP高级编程03-PDF


2009-07-01

JSP高级编程02-PDF


2009-07-01

JSP高级编程01-PDF


2009-07-01

CPPToolTip VC汽泡提示控件

CPPToolTip控件
链接:http://www.codeproject.com/KB/miscctrl/pptooltip.aspx

1 CPPToolTip控件介绍
文件及说明:
PPTooltip.h / PPTooltip.cpp:CPPTooltip类。
PPHtmlDrawer.h / PPHtmlDrawer.cpp:CPPHtmlDrawer类,用于在提示框内绘制HTML字符串。
PPDrawManager.h / PPDrawManager.cpp:CPPDrawManager类,一组图形绘制辅助方法。
CeXDib.h / CeXDib.cpp:CCeXDib类(感谢Davide Pizzolato和Davide Calabro),用于扩展背景效果。在PPDrawManager.h中定义USE_SHADE即可启用这些扩展背景效果:
#define USE_SHADE

2 在普通窗体控件中使用
2.1 创建CPPToolTip对象
CPPToolTip m_tooltip;
2.2 在窗口初始化函数OnInitDialog中:
// Create the CPPToolTip object
m_tooltip.Create(this);
2.3 添加提示控件
m_tooltip.AddTool(GetDlgItem(IDC_BUTTON1), _T("Tooltip to the control IDC_BUTTON1"));
或者:
m_tooltip.AddTool(this, _T("Tooltip for rectangle area"), CRect(100, 100, 200, 200));
2.4 拦截处理鼠标消息
BOOL ...::PreTranslateMessage(MSG* pMsg)
{
    m_tooltip.RelayEvent(pMsg);
    return CDialog::PreTranslateMessage(pMsg); // 注意仍需调用基类实现
}

3 在工具栏中使用
3.1 在CMainFrame中定义CPPToolTip对象
CPPToolTip m_tooltip;
3.2 在CMainFrame的OnCreate函数中创建CPPToolTip对象,添加工具栏提示
int CMainFrame::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    ...
    m_tooltip.Create(this);
    // Adds tooltip for toolbar
    m_tooltip.AddToolBar(&m_wndToolBar);
    return 0;
}
3.3 截取和处理鼠标消息
BOOL CMainFrame::PreTranslateMessage(MSG* pMsg)
{
    m_tooltip.RelayEvent(pMsg);
    return CFrameWnd::PreTranslateMessage(pMsg); // 注意仍需调用基类实现
}

4 在菜单中使用
4.1 在CMainFrame中定义CPPToolTip变量
CPPToolTip m_tooltip;
4.2 在CMainFrame的OnCreate()函数中创建CPPToolTip对象
m_tooltip.Create(this);
4.3 取消注释以启用菜单支持:在PPTooltip.h中
#define PPTOOLTIP_USE_MENU
4.4 为CMainFrame添加两个事件处理函数
// 选中菜单事件
void CMainFrame::OnMenuSelect(UINT nItemID, UINT nFlags, HMENU hSubMenu)
{
    m_tooltip.OnMenuSelect(nItemID, nFlags, hSubMenu);
    CFrameWnd::OnMenuSelect(nItemID, nFlags, hSubMenu);
}
// 闲置状态事件
void CMainFrame::OnEnterIdle(UINT nWhy, CWnd* pWho)
{
    m_tooltip.OnEnterIdle(nWhy, pWho);
}
4.5 截取和处理鼠标消息
BOOL CMainFrame::PreTranslateMessage(MSG* pMsg)
{
    m_tooltip.RelayEvent(pMsg);
    return CFrameWnd::PreTranslateMessage(pMsg);
}
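把上述步骤合在一起,在对话框中的最简用法大致如下(CMyDlg为示意用的假设类名,控件ID沿用上文):

// 最简示意:在对话框中集成CPPToolTip(类名为假设)
BOOL CMyDlg::OnInitDialog()
{
    CDialog::OnInitDialog();
    m_tooltip.Create(this);             // 创建提示对象
    m_tooltip.AddTool(GetDlgItem(IDC_BUTTON1),
                      _T("Tooltip to the control IDC_BUTTON1"));
    return TRUE;
}

BOOL CMyDlg::PreTranslateMessage(MSG* pMsg)
{
    m_tooltip.RelayEvent(pMsg);         // 把消息转发给提示控件
    return CDialog::PreTranslateMessage(pMsg);
}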

2009-06-19

VC之美化界面篇

VC之美化界面篇
作者:白乔
链接:http://vcer.net/1046595482643.html

本文专题讨论VC中的界面美化,适用于具有中等VC水平的读者。读者最好具有以下VC基础:
1. 大致了解MFC框架的基本运作原理;
2. 熟悉Windows消息机制,熟悉MFC的消息映射和反射机制;
3. 熟悉OOP理论和技术。
本文根据笔者多年的开发经验,并结合简单的例子一一展开,希望对读者有所帮助。

1 美化界面之开题篇

相信使用过《金山毒霸》、《瑞星杀毒》软件的读者应该还记得它们的精美界面:
图1 瑞星杀毒软件的精美界面
程序的功能如何强大是一回事,它的用户界面则是另一回事。千万不要忽视程序的用户界面,因为它是给用户最初最直接的印象,丑陋的界面、不友好的风格肯定会影响用户对软件程序的使用。
“受之以鱼,不若授之以渔”,本教程并不会向你推荐《瑞星杀毒软件》精美界面的具体实现,而只是向你推荐一些常用的美化方法。

2 美化界面之基础篇

美化界面需要先熟悉Windows下的绘图操作,并明白Windows的幕后绘图操作,才能有的放矢,知道哪些可以使用,哪些应该避免……

2.1 Windows下的绘图操作

熟悉DOS的读者可能知道:DOS下面的图形操作很方便,进入图形模式,整个屏幕就是你的了,你希望在哪画个点,那个地方就会出现一个点,红的、或者黄的,随你的便。你也可以花点时间画个按钮,画个你自己的菜单,等等……
Windows本身就是图形界面,所以Windows下面的绘图操作功能更丰富、更简单。要了解Windows下的绘图操作,要实现Windows界面的美化,就必须了解MFC封装的设备环境类和图形对象类。

2.1.1 设备环境类

Windows下的绘图操作说到底就是DC操作。DC(Device Context,设备环境)对象是一个抽象的作图环境,可能对应屏幕,也可能对应打印机或其它设备。这个环境是设备无关的,所以你在对不同的设备输出时只需要使用不同的设备环境就行了,而作图方式可以完全不变。这也就是Windows的设备无关性。
MFC的CDC类封装了Windows API中大部分的画图函数。CDC的常见操作函数包括:
Drawing-Attribute Functions:绘图属性操作,如:设置透明模式
Mapping Functions:映射操作
Coordinate Functions:坐标操作
Clipping Functions:剪切操作
Line-Output Functions:画线操作
Simple Drawing Functions:简单绘图操作,如:绘制矩形框
Ellipse and Polygon Functions:椭圆/多边形操作
Text Functions:文字输出操作
Printer Escape Functions:打印操作
Scrolling Functions:滚动操作
*Bitmap Functions:位图操作
*Region Functions:区域操作
*Font Functions:字体操作
*Color and Color Palette Functions:颜色/调色板操作
其中,标注*项会用到相应的图形对象类,参见2.1.2节内容。

2.1.2 图形对象类

设备环境不足以包含绘图功能所需的所有绘图特征,除了设备环境外,Windows还有其他一些图形对象用来储存绘图特征。这些附加的功能包括从画线的宽度和颜色到画文本时所用的字体。图形对象类封装了所有六个图形对象。
下面列出了MFC的图形对象类(MFC类 / 图形对象句柄 / 图形对象目的):
CBitmap / HBITMAP / 内存中的位图
CBrush / HBRUSH / 画刷特性:填充某个图形时所使用的颜色和模式
CFont / HFONT / 字体特性:写文本时所使用的字体
CPalette / HPALETTE / 调色板颜色
CPen / HPEN / 画笔特性:画轮廓时所使用的线的粗细
CRgn / HRGN / 区域特性:包括定义它的点
表1 图形对象类和它们封装的句柄

使用CDC和图形对象类,在Windows里绘图还算是很简单的。观察以下的画面:
图2 使用CDC绘制出的按钮
该画面通过以下代码自行绘制了假按钮:

BOOL CUi1View::PreCreateWindow(CREATESTRUCT& cs)
{
    //设置背景色
    //CBrush CUi1View::m_Back
    m_Back.CreateSolidBrush(::GetSysColor(COLOR_3DFACE));
    cs.lpszClass = AfxRegisterWndClass(0, 0, m_Back, NULL);
    return CView::PreCreateWindow(cs);
}

int CUi1View::OnCreate(LPCREATESTRUCT lpCreateStruct)
{
    if (CView::OnCreate(lpCreateStruct) == -1)
        return -1;
    //创建字体
    //CFont CUi1View::m_Font
    m_Font.CreatePointFont(120, "Impact");
    return 0;
}

void CUi1View::OnDraw(CDC* pDC)
{
    //绘制按钮框架
    pDC->DrawFrameControl(CRect(100, 100, 220, 160), DFC_BUTTON, DFCS_BUTTONPUSH);
    //输出文字
    pDC->SetBkMode(TRANSPARENT);
    pDC->TextOut(120, 120, "Hello, CFan!");
}

呵呵,不好意思,这并不是真的Windows按钮,它只是一个假的空框子,当用户在按钮上点击鼠标时,放心,什么事情都不会发生。

2.2 Windows的幕后绘图操作

在Windows中,如果所有的界面操作都由用户代码来实现,那将是一个很浩大的工程。笔者曾经在DOS下设计过窗口图形界面,代码上千行,但实现的界面还是很古板、难看,除了我那个对编程一窍不通的女友,没有一个人欣赏它;而且,更要命的是,操作系统,包括别的应用程序并不认识你的界面元素,这才是真正悲哀的。认识这些界面的只有你的程序,图2中的按钮永远只是一个无用的框子。
有了Windows,一切都好办了,Windows将诸如按钮、菜单、工具栏等等这些通用界面的绘制及动作都交给了系统,程序员就不用花心思再画那些按钮了,可以将更多的精力放在程序的功能实现方面。
所有的标准界面元素都被Windows封装好了。Windows知道怎么画你的菜单以及你的标注着“Hello, CFan!”的按钮。当CFan某个快乐的小编(譬如:小飞)点击这个按钮的时候,Windows也明白按钮按下去的时候该有的模样,甚至,当这个友好的按钮获取焦点时,Windows也会不失时机地为它准备一个虚框……
有利必有弊。你的不满这时候产生了:你既想使用Windows的True Button,可也嫌它的界面不够好看,譬如,你喜欢用蓝色的粗体表达你对CFan的无限情怀(正如图2那样)。人心不足,有办法吗?有的。

3 美化界面之实现篇

Windows还是给程序员留下了很多后门,通过一些途径还是可以美化界面的。本章节我们系统学习一下Windows界面美化的实现。

3.1 美化界面的途径

如何以合法的手段来达到美化界面的效果?一般美化界面的方法包括:
1. 使用MFC类的既有函数,设定界面属性;
2. 利用Windows的消息机制,截获有用的Windows消息。通过MFC的消息映射(Message Mapping)和反射(Message Reflecting)机制,在Windows准备或者正在绘制该元素时,偷偷修改它的状态和行为,譬如:让按钮的边框为红色;
3. 利用MFC类的虚函数机制,重载有用的虚函数,在MFC框架调用该函数的时候,重新定义它的状态和行为。
一般来说,应用程序可以通过以下两种途径来实现以上的方法:
1. 在父窗口里,截获自身的或者由子元素(包括控件和菜单等元素)传递的关于界面绘制的消息;
2. 子类化子元素,或者为子元素准备一个新的类(一般来说该类必须继承于MFC封装的某个标准类,如:CButton)。在该子元素里,截获自身的或者从父窗口反射过来的关于界面绘制的消息。譬如:用户可以创建一个CXPButton类来实现具有XP风格的按钮,CXPButton继承于CButton。
对于应用程序,使用CXPButton类的途径按对话框窗口和普通窗口分成两种:
① 对话框窗口中,直接将原先绑定按钮的CButton类替换成CXPButton类,或者在绑定变量时直接指定Control类型为CXPButton,如图3所示:
图3 为按钮指定CXPButton类型
② 在普通窗口中,直接创建一个CXPButton类对象,然后在OnCreate()中调用CXPButton的Create方法。
以下的章节将综合地使用以上的方法,请读者朋友留心观察。

3.2 使用MFC类的既有函数

在界面美化的专题中,MFC也并非一无是处。MFC类对于界面美化也做了部分的努力,以下是一些可以使用的函数,参数说明略去。

CWinApp::SetDialogBkColor
void SetDialogBkColor( COLORREF clrCtlBk = RGB(192, 192, 192), COLORREF clrCtlText = RGB(0, 0, 0) );
指定对话框的背景色和文本颜色。

CListCtrl::SetBkColor
CReBarCtrl::SetBkColor
CStatusBarCtrl::SetBkColor
CTreeCtrl::SetBkColor
COLORREF SetBkColor( COLORREF clr );
设定背景色。

CListCtrl::SetTextColor
CReBarCtrl::SetTextColor
CTreeCtrl::SetTextColor
COLORREF SetTextColor( COLORREF clr );
设定文本颜色。

CListCtrl::SetBkImage
BOOL SetBkImage( LVBKIMAGE* plvbkImage );
BOOL SetBkImage( HBITMAP hbm, BOOL fTile = TRUE, int xOffsetPercent = 0, int yOffsetPercent = 0 );
BOOL SetBkImage( LPTSTR pszUrl, BOOL fTile = TRUE, int xOffsetPercent = 0, int yOffsetPercent = 0 );
设定列表控件的背景图片。

CComboBoxEx::SetExtendedStyle
CListCtrl::SetExtendedStyle
CTabCtrl::SetExtendedStyle
CToolBarCtrl::SetExtendedStyle
DWORD SetExtendedStyle( DWORD dwExMask, DWORD dwExStyles );
设置控件的扩展属性,例如:设置列表控件属性带有表格线。

图4是个简单应用MFC类的既有函数来改善Windows界面的例子:
图4 使用MFC类的既有函数美化界面
相关实现代码如下:

BOOL CUi2App::InitInstance()
{
    //…
    //设置对话框背景色和字体颜色
    SetDialogBkColor(RGB(128, 192, 255), RGB(0, 0, 255));
    //…
}

BOOL CUi2Dlg::OnInitDialog()
{
    //…
    //设置列表控件属性带有表格线
    DWORD NewStyle = m_List.GetExtendedStyle();
    NewStyle |= LVS_EX_GRIDLINES;
    m_List.SetExtendedStyle(NewStyle);
    //设置列表控件字体颜色为红色
    m_List.SetTextColor(RGB(255, 0, 0));
    //填充数据
    m_List.InsertColumn(0, "QQ", LVCFMT_LEFT, 100);
    m_List.InsertColumn(1, "昵称", LVCFMT_LEFT, 100);
    m_List.InsertItem(0, "5854165");
    m_List.SetItemText(0, 1, "白乔");
    m_List.InsertItem(1, "6823864");
    m_List.SetItemText(1, 1, "Satan");
    //…
}

嗯,这样的界面还算不错吧?
3.3 使用Windows的消息机制

使用MFC类的既有函数来美化界面,其功能是有限的。既然Windows是通过消息机制进行通讯的,那么我们就可以通过截获一些有用的消息来美化我们的界面,以下是一些有用的Windows消息:
WM_PAINT
WM_ERASEBKGND
WM_CTLCOLOR*
WM_DRAWITEM*
WM_MEASUREITEM*
NM_CUSTOMDRAW*
注意,标注*的消息是子元素发送给父窗口的通知消息,其它的为窗口或者子元素自身的消息。

3.3.1 WM_PAINT

WM_PAINT消息相信大家都很熟悉,一个窗口要重绘了,就会有一个WM_PAINT消息发送给窗口。
可以响应窗口的WM_PAINT,以更改它们的模样。WM_PAINT的映射函数原型如下:
afx_msg void OnPaint();
控件也是窗口,所以控件也有WM_PAINT消息,通过消息映射我们完全可以定义控件的界面,如图5所示:
图5 利用WM_PAINT消息美化界面
实现代码也很简单:

void CLazyStatic::OnPaint()
{
    CPaintDC dc(this); // device context for painting
    //什么都不输出,仅仅画一个矩形框
    CRect rc;
    GetClientRect(&rc);
    dc.Rectangle(rc);
}

哈哈,简单吧?不过WM_PAINT确实绝了点,它要求应用程序完成元素界面的所有绘制过程,想象一下如何画出一个完整的列表控件?太烦了吧。一般来说,很少有人喜欢使用WM_PAINT,还有其它更细致的消息。

3.3.2 WM_ERASEBKGND

Windows在向窗口发送WM_PAINT消息之前,总会发送一个WM_ERASEBKGND消息通知该窗口擦除背景,默认情况下,Windows将以窗口的背景色清除该窗口。
可以响应窗口(包括子元素)的WM_ERASEBKGND,以更改它们的背景。WM_ERASEBKGND的映射函数原型如下:
afx_msg BOOL OnEraseBkgnd( CDC* pDC );
返回值:指定背景是否已清除,如果为FALSE,系统将自动清除。
参数:pDC指定了绘制操作所使用的设备环境。
图6是个简单的例子,通过OnEraseBkgnd为对话框加载了一幅位图背景:
图6 利用WM_ERASEBKGND消息美化界面
实现代码也很简单:

BOOL CUi4Dlg::OnInitDialog()
{
    //…
    //加载位图
    //CBitmap m_Back;
    m_Back.LoadBitmap(IDB_BACK);
    //…
}

BOOL CUi4Dlg::OnEraseBkgnd(CDC* pDC)
{
    CDC dc;
    dc.CreateCompatibleDC(pDC);
    dc.SelectObject(&m_Back);
    //获取BITMAP对象
    BITMAP hb;
    m_Back.GetBitmap(&hb);
    //获取窗口大小
    CRect rt;
    GetClientRect(&rt);
    //显示位图
    pDC->StretchBlt(0, 0, rt.Width(), rt.Height(), &dc, 0, 0, hb.bmWidth, hb.bmHeight, SRCCOPY);
    return TRUE;
}

HBRUSH CUi4Dlg::OnCtlColor(CDC* pDC, CWnd* pWnd, UINT nCtlColor)
{
    //设置透明背景模式
    pDC->SetBkMode(TRANSPARENT);
    //设置背景刷子为空
    return (HBRUSH)::GetStockObject(HOLLOW_BRUSH);
}

同时别忘了响应OnCtlColor,否则窗口里面的控件就不透明了。OnCtlColor的内容参见3.3.3节。

3.3.3 WM_CTLCOLOR

在控件显示之前,每一个控件都会向父对话框发送一个WM_CTLCOLOR消息要求获取绘制所需要的颜色。WM_CTLCOLOR消息缺省处理函数CWnd::OnCtlColor返回一个HBRUSH类型的句柄,这样,就可以设置前景和背景文本颜色,并为控件或者对话框的非文本区域选定一个刷子。
WM_CTLCOLOR的映射函数原型如下:
afx_msg HBRUSH OnCtlColor( CDC* pDC, CWnd* pWnd, UINT nCtlColor );
返回值:用以指定背景的刷子。
参数:
pDC 指定了绘制操作所使用的设备环境。
pWnd 控件指针。
nCtlColor 指定控件类型,其取值如表2所示:
CTLCOLOR_BTN 按钮控件
CTLCOLOR_DLG 对话框
CTLCOLOR_EDIT 编辑控件
CTLCOLOR_LISTBOX 列表框
CTLCOLOR_MSGBOX 消息框
CTLCOLOR_SCROLLBAR 滚动条
CTLCOLOR_STATIC 静态控件
表2 nCtlColor的类型值与含义

作为一个简单的例子,观察以下的代码:

BOOL CUi5Dlg::OnInitDialog()
{
    //…
    //创建字体
    //CFont CUi5Dlg::m_Font1, CUi5Dlg::m_Font2
    m_Font1.CreatePointFont(120, "Impact");
    m_Font2.CreatePointFont(120, "Arial");
    return TRUE; // return TRUE unless you set the focus to a control
}

HBRUSH CUi5Dlg::OnCtlColor(CDC* pDC, CWnd* pWnd, UINT nCtlColor)
{
    HBRUSH hbr = CDialog::OnCtlColor(pDC, pWnd, nCtlColor);
    if (nCtlColor == CTLCOLOR_STATIC)
    {
        //区分静态控件
        switch (pWnd->GetDlgCtrlID())
        {
        case IDC_STATIC1:
            pDC->SelectObject(&m_Font1);
            pDC->SetTextColor(RGB(0, 0, 255));
            break;
        case IDC_STATIC2:
            pDC->SelectObject(&m_Font2);
            pDC->SetTextColor(RGB(255, 0, 0));
            break;
        }
    }
    return hbr;
}

生成的界面如下:
图7 利用WM_CTLCOLOR消息美化界面

3.3.4 WM_DRAWITEM

OnCtlColor只能修改元素的颜色,但不能修改元素的界面框架,WM_DRAWITEM则可以。
当一个具有Owner draw风格的元素(包括按钮、组合框、列表框和菜单等)需要显示外观时,该元素会发送一条WM_DRAWITEM消息至它的隶属窗口(Owner)。
WM_DRAWITEM的映射函数原型如下:
afx_msg void OnDrawItem( int nIDCtl, LPDRAWITEMSTRUCT lpDrawItemStruct );
参数:
nIDCtl 该控件的ID,如果该元素为菜单,则nIDCtl为0。
lpDrawItemStruct 指向DRAWITEMSTRUCT结构对象的指针,DRAWITEMSTRUCT的结构定义如下:

typedef struct tagDRAWITEMSTRUCT {
    UINT CtlType;
    UINT CtlID;
    UINT itemID;
    UINT itemAction;
    UINT itemState;
    HWND hwndItem;
    HDC hDC;
    RECT rcItem;
    DWORD itemData;
} DRAWITEMSTRUCT;

CtlType指定了控件的类型,其取值如表3所示:
ODT_BUTTON 按钮控件
ODT_COMBOBOX 组合框控件
ODT_LISTBOX 列表框控件
ODT_LISTVIEW 列表视图
ODT_MENU 菜单项
ODT_STATIC 静态文本控件
ODT_TAB Tab控件
表3 CtlType的类型值与含义
CtlID 指定自绘控件的ID值,该成员不适用于菜单项。
itemID 表示菜单项ID,也可以表示列表框或者组合框中某项的索引值。对于一个空的列表框或组合框,该成员的值为-1,这时应用程序只绘制焦点矩形(该矩形的坐标由rcItem成员给出)。虽然此时控件中没有需要显示的项,但是绘制焦点矩形还是很有必要的,因为这样做能够提示用户该控件是否具有输入焦点。当然也可以设置itemAction成员为合适值,使得无需绘制焦点。
itemAction 指定绘制行为,其取值为表4中所示值的一个或者多个的联合:
ODA_DRAWENTIRE 当整个控件都需要被绘制时,设置该值。
ODA_FOCUS 如果控件需要在获得或失去焦点时被绘制,则设置该值。此时应该检查itemState成员,以确定控件是否具有输入焦点。
ODA_SELECT 如果控件需要在选中状态改变时被绘制,则设置该值。此时应该检查itemState成员,以确定控件是否处于选中状态。
表4 itemAction的类型值与含义
itemState 指定了当前绘制项的状态。例如,如果菜单项应该被灰色显示,则可以指定ODS_GRAYED状态标志。其取值为表5中所示值的一个或者多个的联合:
ODS_CHECKED 标记状态,仅适用于菜单项。
ODS_DEFAULT 默认状态。
ODS_DISABLED 禁止状态。
ODS_FOCUS 焦点状态。
ODS_GRAYED 灰化状态,仅适用于菜单项。
ODS_SELECTED 选中状态。
ODS_HOTLIGHT 仅适用于Windows 98/Me/Windows 2000/XP,热点状态:如果鼠标指针位于控件之上,则设置该值,这时控件会显示高亮颜色。
ODS_INACTIVE 仅适用于Windows 98/Me/Windows 2000/XP,非激活状态。
ODS_NOACCEL 仅适用于Windows 2000/XP,控件是否有快速键。
ODS_COMBOBOXEDIT 在自绘组合框控件中只绘制选择区域。
ODS_NOFOCUSRECT 仅适用于Windows 2000/XP,不绘制捕获焦点的效果。
表5 itemState的类型值与含义
hwndItem 指定了组合框、列表框和按钮等自绘控件的窗口句柄;如果自绘的对象为菜单项,则表示包含该菜单项的菜单句柄。
hDC 指定了绘制操作所使用的设备环境。
rcItem 指定了将被绘制的矩形区域,这个矩形区域就是上面hDC的作用范围。系统会自动裁剪组合框、列表框或按钮等控件的自绘制区域以外的部分,也就是说rcItem中的坐标点(0,0)指的就是控件的左上角。但是系统不裁剪菜单项,所以在绘制菜单项的时候,必须先通过一定的换算得到该菜单项的位置,以保证绘制操作在我们希望的区域中进行。
itemData 对于菜单项,该成员的取值为由CMenu::AppendMenu、CMenu::InsertMenu、CMenu::ModifyMenu等函数传递给菜单的值;对于列表框或者组合框,该成员的取值为由CComboBox::AddString、CComboBox::InsertString、CListBox::AddString或者CListBox::InsertString等函数传递给控件的值。如果CtlType的取值是ODT_BUTTON或者ODT_STATIC,itemData的取值为0。
图8是个相应的例子,它修改了按钮的界面:
图8 利用WM_DRAWITEM消息美化界面
实现代码如下:

BOOL CUi6Dlg::OnInitDialog()
{
    //…
    //创建字体
    //CFont CUi6Dlg::m_Font
    m_Font.CreatePointFont(120, "Impact");
    //…
}

void CUi6Dlg::OnDrawItem(int nIDCtl, LPDRAWITEMSTRUCT lpDrawItemStruct)
{
    if (nIDCtl == IDC_HELLO_CFAN)
    {
        //绘制按钮框架
        UINT uStyle = DFCS_BUTTONPUSH;
        //是否按下去了?
        if (lpDrawItemStruct->itemState & ODS_SELECTED)
            uStyle |= DFCS_PUSHED;
        CDC dc;
        dc.Attach(lpDrawItemStruct->hDC);
        dc.DrawFrameControl(&lpDrawItemStruct->rcItem, DFC_BUTTON, uStyle);
        //输出文字
        dc.SelectObject(&m_Font);
        dc.SetTextColor(RGB(0, 0, 255));
        dc.SetBkMode(TRANSPARENT);
        CString sText;
        m_HelloCFan.GetWindowText(sText);
        dc.TextOut(lpDrawItemStruct->rcItem.left + 20, lpDrawItemStruct->rcItem.top + 20, sText);
        //是否得到焦点
        if (lpDrawItemStruct->itemState & ODS_FOCUS)
        {
            //画虚框
            CRect rtFocus = lpDrawItemStruct->rcItem;
            rtFocus.DeflateRect(3, 3);
            dc.DrawFocusRect(&rtFocus);
        }
        dc.Detach(); //该DC归系统所有,只解除关联,避免CDC析构时将其删除
        return;
    }
    CDialog::OnDrawItem(nIDCtl, lpDrawItemStruct);
}

别忘了标记Owner draw属性:
图9 指定按钮的Owner draw属性
值得一提的是,CWnd内部截获了WM_DRAWITEM、WM_MEASUREITEM等消息,并映射成子元素相应虚函数的调用,如CButton::DrawItem()。所以,以上例子也可以通过派生出一个CButton的派生类,并重载该类的DrawItem()函数来实现。使用虚函数机制实现界面美化参见3.4节。

3.3.5 WM_MEASUREITEM

仅仅WM_DRAWITEM还是不够的,对于一些特殊的控件,如ListBox,系统在发送WM_DRAWITEM消息前,还发送WM_MEASUREITEM消息,需要你设置ListBox中每个项目的高度。
WM_MEASUREITEM的映射函数原型如下:
afx_msg void OnMeasureItem( int nIDCtl, LPMEASUREITEMSTRUCT lpMeasureItemStruct );
nIDCtl 该控件的ID,如果该元素为菜单,则nIDCtl为0。
lpMeasureItemStruct 指向MEASUREITEMSTRUCT结构对象的指针,MEASUREITEMSTRUCT的结构定义如下:

typedef struct tagMEASUREITEMSTRUCT {
    UINT CtlType;
    UINT CtlID;
    UINT itemID;
    UINT itemWidth;
    UINT itemHeight;
    DWORD itemData;
} MEASUREITEMSTRUCT;

CtlType指定了控件的类型,其取值如表6所示:
ODT_COMBOBOX 组合框控件
ODT_LISTBOX 列表框控件
ODT_MENU 菜单项
表6 CtlType的类型值与含义
CtlID 指定自绘控件的ID值,该成员不适用于菜单项。
itemID 表示菜单项ID,也可以表示可变高度的列表框或组合框中某项的索引值。该成员不适用于固定高度的列表框或组合框。
itemWidth 指定菜单项的宽度。
itemHeight 指定菜单项或者列表框中某项的高度,最大值为255。
itemData 对于菜单项,该成员的取值为由CMenu::AppendMenu、CMenu::InsertMenu、CMenu::ModifyMenu等函数传递给菜单的值;对于列表框或者组合框,该成员的取值为由CComboBox::AddString、CComboBox::InsertString、CListBox::AddString或者CListBox::InsertString等函数传递给控件的值。
图10示出了OnMeasureItem的效果:
图10 利用WM_MEASUREITEM消息美化界面
相应的OnMeasureItem()实现如下:

void CUi7Dlg::OnMeasureItem(int nIDCtl, LPMEASUREITEMSTRUCT lpMeasureItemStruct)
{
    if (nIDCtl == IDC_COLOR_PICKER)
    {
        //设定每项的高度为30
        lpMeasureItemStruct->itemHeight = 30;
        return;
    }
    CDialog::OnMeasureItem(nIDCtl, lpMeasureItemStruct);
}

同样别忘了指定下拉框的Owner draw属性:
图11 指定下拉框的Owner draw属性

3.3.6 NM_CUSTOMDRAW

大家也许熟悉WM_NOTIFY,控件通过WM_NOTIFY向父窗口发送消息。在WM_NOTIFY消息体中,部分控件会发送NM_CUSTOMDRAW告诉父窗口自己需要绘图。
可以反射NM_CUSTOMDRAW消息,如:
ON_NOTIFY_REFLECT(NM_CUSTOMDRAW, OnCustomDraw)
afx_msg void OnCustomDraw(NMHDR *pNMHDR, LRESULT *pResult);
参数:
pNMHDR 说到底只是一个指针,大多数情况下它指向一个NMHDR结构对象,NMHDR结构如下:

typedef struct tagNMHDR {
    HWND hwndFrom;
    UINT idFrom;
    UINT code;
} NMHDR;

其中:
hwndFrom 发送方控件的窗口句柄
idFrom 发送方控件的ID
code 通知代码
对于某些控件来说,pNMHDR则会解释成其它内容更丰富的结构对象的指针,如:对于列表控件来说,pNMHDR常常指向一个NMCUSTOMDRAW对象,NMCUSTOMDRAW结构如下:

typedef struct tagNMCUSTOMDRAWINFO {
    NMHDR hdr;
    DWORD dwDrawStage;
    HDC hdc;
    RECT rc;
    DWORD dwItemSpec;
    UINT uItemState;
    LPARAM lItemlParam;
} NMCUSTOMDRAW, FAR * LPNMCUSTOMDRAW;

hdr NMHDR对象。
dwDrawStage 当前绘制状态,其取值如表7所示:
CDDS_POSTERASE 擦除循环结束
CDDS_POSTPAINT 绘制循环结束
CDDS_PREERASE 准备开始擦除循环
CDDS_PREPAINT 准备开始绘制循环
CDDS_ITEM 指定dwItemSpec、uItemState、lItemlParam参数有效
CDDS_ITEMPOSTERASE 列表项擦除结束
CDDS_ITEMPOSTPAINT 列表项绘制结束
CDDS_ITEMPREERASE 准备开始列表项擦除
CDDS_ITEMPREPAINT 准备开始列表项绘制
CDDS_SUBITEM 指定列表子项
表7 dwDrawStage的类型值与含义
hdc 指定了绘制操作所使用的设备环境。
rc 指定了将被绘制的矩形区域。
dwItemSpec 列表项的索引。
uItemState 当前列表项的状态,其取值如表8所示:
CDIS_CHECKED 标记状态。
CDIS_DEFAULT 默认状态。
CDIS_DISABLED 禁止状态。
CDIS_FOCUS 焦点状态。
CDIS_GRAYED 灰化状态。
CDIS_SELECTED 选中状态。
CDIS_HOTLIGHT 热点状态。
CDIS_INDETERMINATE 不定状态。
CDIS_MARKED 标注状态。
表8 uItemState的类型值与含义
lItemlParam 当前列表项的绑定数据。
pResult 指向状态值的指针,指定系统后续操作,其含义依赖于dwDrawStage:
当dwDrawStage为CDDS_PREPAINT时,pResult含义如表9所示:
CDRF_DODEFAULT 默认操作,即系统在列表项绘制循环过程不再发送NM_CUSTOMDRAW。
CDRF_NOTIFYITEMDRAW 指定列表项绘制前后发送消息。
CDRF_NOTIFYPOSTERASE 列表项擦除结束时发送消息。
CDRF_NOTIFYPOSTPAINT 列表项绘制结束时发送消息。
表9 pResult的类型值与含义(一)
当dwDrawStage为CDDS_ITEMPREPAINT时,pResult含义如表10所示:
CDRF_NEWFONT 指定后续操作采用应用中指定的新字体。
CDRF_NOTIFYSUBITEMDRAW 列表子项绘制时发送消息。
CDRF_SKIPDEFAULT 系统不必再绘制该子项。
表10 pResult的类型值与含义(二)
以下是一个利用NM_CUSTOMDRAW消息绘制出的多色列表框的例子:
图12 利用NM_CUSTOMDRAW消息美化界面
对应代码如下:

void CCoolList::OnCustomDraw(NMHDR *pNMHDR, LRESULT *pResult)
{
    //类型安全转换
    NMLVCUSTOMDRAW* pLVCD = reinterpret_cast<NMLVCUSTOMDRAW*>(pNMHDR);
    *pResult = 0;
    //指定列表项绘制前后发送消息
    if (CDDS_PREPAINT == pLVCD->nmcd.dwDrawStage)
    {
        *pResult = CDRF_NOTIFYITEMDRAW;
    }
    else if (CDDS_ITEMPREPAINT == pLVCD->nmcd.dwDrawStage)
    {
        //奇数行
        if (pLVCD->nmcd.dwItemSpec % 2)
            pLVCD->clrTextBk = RGB(255, 255, 128);
        //偶数行
        else
            pLVCD->clrTextBk = RGB(128, 255, 255);
        //继续
        *pResult = CDRF_DODEFAULT;
    }
}

注意到上例采取了3.1节所推荐的第2种实现方法,派生了一个新类CCoolList。

3.4 使用MFC类的虚函数机制

修改Windows界面,除了从Windows消息机制下功夫,也可以从MFC类下功夫,这应该得益于类的虚函数机制。为了防止诸如“面向对象技术”等术语在此泛滥,以下仅举一段代码作为例子:

void CView::OnPaint()
{
    // standard paint routine
    CPaintDC dc(this);
    OnPrepareDC(&dc);
    OnDraw(&dc);
}

这是MFC中viewcore.cpp的源代码。很多读者总不明白OnDraw()和OnPaint()之间的关系,从以上的代码中很容易看出,CView的WM_PAINT消息响应函数OnPaint()会自动调用CView::OnDraw()。而作为开发者的用户,可以通过简单的OnDraw()重载实现对WM_PAINT的处理。所以说,对MFC类虚函数的重载是对消息机制的扩展。
以下列出了与界面美化相关的虚函数,参数说明略去:
CButton::DrawItem
CCheckListBox::DrawItem
CComboBox::DrawItem
CHeaderCtrl::DrawItem
CListBox::DrawItem
CMenu::DrawItem
CStatusBar::DrawItem
CStatusBarCtrl::DrawItem
CTabCtrl::DrawItem
virtual void DrawItem( LPDRAWITEMSTRUCT lpDrawItemStruct );
Owner draw元素自绘函数。
很显然,位图菜单都是通过这个DrawItem画出来的。限于篇幅,正文不再附以完整例程,下面补充一个简化的示意。
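下面按3.3.4节的画法给出一个重载CButton::DrawItem的简化示意(CColorButton为假设的类名,仅演示思路):

// 示意:通过派生CButton并重载DrawItem实现自绘按钮(类名为假设)
class CColorButton : public CButton
{
public:
    virtual void DrawItem(LPDRAWITEMSTRUCT lpDIS)
    {
        CDC dc;
        dc.Attach(lpDIS->hDC);                       // 使用系统传入的DC
        UINT uStyle = DFCS_BUTTONPUSH;
        if (lpDIS->itemState & ODS_SELECTED)         // 按下状态
            uStyle |= DFCS_PUSHED;
        dc.DrawFrameControl(&lpDIS->rcItem, DFC_BUTTON, uStyle);
        //蓝色文字、透明背景
        dc.SetBkMode(TRANSPARENT);
        dc.SetTextColor(RGB(0, 0, 255));
        CString sText;
        GetWindowText(sText);
        dc.DrawText(sText, &lpDIS->rcItem, DT_CENTER | DT_VCENTER | DT_SINGLELINE);
        dc.Detach();                                 // DC归系统所有,只解除关联
    }
};

使用时按3.1节的做法,把按钮绑定变量的类型由CButton换成CColorButton,并在资源编辑器中勾选按钮的Owner draw属性即可。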

2009-06-17

Install Shield安装程序制作图解

本文介绍了利用InstallShield制作应用程序安装软件的方法。在文中作者除了对常用的一些技术进行介绍外,还对安装过程位图的显示、标题和背景的定制等高级技术作了简要的阐述,本文所述方法能够满足大多数安装软件的制作需求。

InstallShield是一种非常成功的应用软件安装程序制作工具,以功能强大、灵活性好、容易扩展和网络支持强大而著称,并因此成为目前最为流行的安装程序专业制作工具之一。该软件不仅提供了灵活方便的向导支持,也允许用户通过其内建的脚本语言InstallScript对整个安装过程在代码级上进行修改,可以像VC等高级语言一样对安装过程进行精确控制。InstallShield也是Visual C++附带的一个安装程序制作工具,在VC安装结束前将会询问用户是否安装InstallShield工具,如果当时没有安装,也可以在使用时单独从VC安装盘进行安装。本文将结合一个具体的实例来对InstallShield的使用做一个较为全面的介绍,使读者能够初步掌握使用InstallShield制作专业水准的安装程序。

运行InstallShield将会出现如图1所示的程序界面,与使用其他高级语言开发程序一样,首先要建立一个工程。

2009-06-02

最新名企标准通用C++面试题

C++面试题
参考:http://blog.csdn.net/Ghost90/archive/2009/04/22/4099672.aspx
整理:松鼠
时间:2009-5-8

1、const有什么用途?(请至少说明两种)
答:
(1)可以定义const常量。
(2)const可以修饰函数的参数、返回值,甚至函数的定义体。被const修饰的东西都受到强制保护,可以预防意外的变动,能提高程序的健壮性。

2、在C++程序中调用被C编译器编译后的函数,为什么要加extern "C"?
答:C++语言支持函数重载,C语言不支持函数重载,函数被C++编译后在库中的名字与C语言的不同。假设某个函数的原型为:
void foo(int x, int y);
该函数被C编译器编译后在库中的名字为_foo,而C++编译器则会产生像_foo_int_int之类的名字。C++提供了C连接交换指定符号extern "C"来解决名字匹配问题。

3、请简述以下两个for循环的优缺点(5分)

for (i = 0; i < N; i++)
{
    if (condition)
        DoSomething();
    else
        DoOtherthing();
}

if (condition)
{
    for (i = 0; i < N; i++)
        DoSomething();
}
else
{
    for (i = 0; i < N; i++)
        DoOtherthing();
}

第一种写法的优点:程序简洁;缺点:多执行了N-1次逻辑判断,并且打断了循环“流水线”作业,使得编译器不能对循环进行优化处理,降低了效率。
第二种写法的优点:循环的效率高;缺点:程序不简洁。

4、有关内存的思考题

void GetMemory(char *p)
{
    p = (char *)malloc(100);
}
void Test(void)
{
    char *str = NULL;
    GetMemory(str);
    strcpy(str, "hello world");
    printf(str);
}
请问运行Test函数会有什么样的结果?
答:程序崩溃。因为GetMemory并不能传递动态内存,Test函数中的str一直都是NULL,strcpy(str, "hello world");将使程序崩溃。

char *GetMemory(void)
{
    char p[] = "hello world";
    return p;
}
void Test(void)
{
    char *str = NULL;
    str = GetMemory();
    printf(str);
}
请问运行Test函数会有什么样的结果?
答:可能是乱码。因为GetMemory返回的是指向“栈内存”的指针,该指针的地址不是NULL,但其原先的内容已经被清除,新内容不可知。

void GetMemory2(char **p, int num)
{
    *p = (char *)malloc(num);
}
void Test(void)
{
    char *str = NULL;
    GetMemory2(&str, 100);
    strcpy(str, "hello");
    printf(str);
}
请问运行Test函数会有什么样的结果?
答:
(1)能够输出hello;
(2)内存泄漏。

void Test(void)
{
    char *str = (char *)malloc(100);
    strcpy(str, "hello");
    free(str);
    if (str != NULL)
    {
        strcpy(str, "world");
        printf(str);
    }
}
请问运行Test函数会有什么样的结果?
答:篡改动态内存区的内容,后果难以预料,非常危险。因为free(str);之后,str成为野指针,if (str != NULL)语句不起作用。

5、编写strcpy函数(10分)
已知strcpy函数的原型是
char *strcpy(char *strDest, const char *strSrc);
其中strDest是目的字符串,strSrc是源字符串。
(1)不调用C++/C的字符串库函数,请编写函数strcpy:

char *strcpy(char *strDest, const char *strSrc)
{
    assert((strDest != NULL) && (strSrc != NULL)); // 2分
    char *address = strDest;                       // 2分
    while ((*strDest++ = *strSrc++) != '\0')       // 2分
        ;
    return address;                                // 2分
}

5.1 strcpy能把strSrc的内容复制到strDest,为什么还要char *类型的返回值?
答:为了实现链式表达式。 // 2分
例如:int length = strlen( strcpy( strDest, "hello world") );

6、编写类String的构造函数、析构函数和赋值函数(25分)
已知类String的原型为:

class String
{
public:
    String(const char *str = NULL); // 普通构造函数
    String(const String &other);    // 拷贝构造函数
    ~String(void);                  // 析构函数
    String & operator =(const String &other); // 赋值函数
private:
    char* m_data; // 用于保存字符串
};

请编写String的上述4个函数。
标准答案:

// String的析构函数
String::~String(void) // 3分
{
    delete [] m_data; // 由于m_data是内部数据类型,也可以写成delete m_data;
}

// String的普通构造函数
String::String(const char *str) // 6分
{
    if (str == NULL)
    {
        m_data = new char[1]; // 若能加NULL判断则更好
        *m_data = '\0';
    }
    else
    {
        int length = strlen(str);
        m_data = new char[length + 1]; // 若能加NULL判断则更好
        strcpy(m_data, str);
    }
}

// 拷贝构造函数
String::String(const String &other) // 3分
{
    int length = strlen(other.m_data);
    m_data = new char[length + 1]; // 若能加NULL判断则更好
    strcpy(m_data, other.m_data);
}

// 赋值函数
String & String::operator =(const String &other) // 13分
{
    // (1) 检查自赋值 // 4分
    if (this == &other)
        return *this;
    // (2) 释放原有的内存资源 // 3分
    delete [] m_data;
    // (3) 分配新的内存资源,并复制内容 // 3分
    int length = strlen(other.m_data);
    m_data = new char[length + 1]; // 若能加NULL判断则更好
    strcpy(m_data, other.m_data);
    // (4) 返回本对象的引用 // 3分
    return *this;
}

7、实现双向链表删除一个节点P,在节点P后插入一个节点,写出这两个函数。

void DeleteNode(DuNode *p)
{
    p->prior->next = p->next;
    p->next->prior = p->prior;
}

void InsertNode(DuNode *p, DuNode *s) // Node "s" is inserted after "p"
{
    s->next = p->next;
    p->next->prior = s;
    p->next = s;
    s->prior = p;
}

8、Windows程序的入口是哪里?写出Windows消息机制的流程。
Windows程序的入口是WinMain函数。消息机制的流程:
1. 系统中发生了某个事件;
2. Windows把这个事件翻译为消息,然后把它放到消息队列中;
3. 应用程序从消息队列中接收到这个消息,把它存放在TMsg记录中;
4. 应用程序把消息传递给一个适当的窗口的窗口过程;
5. 窗口过程响应这个消息并进行处理。

9、写一个函数,将其中的\t都转换成4个空格。

#include <iostream>
using namespace std;

char* Convert_t(char *des, char *src)
{
    char *temp;
    des = new char[100]; // 简单起见,假定100字节足以容纳转换结果
    temp = des;
    while (*src != '\0')
    {
        if (*src == '\t')
        {
            src++;
            *des++ = ' ';
            *des++ = ' ';
            *des++ = ' ';
            *des++ = ' ';
            continue;
        }
        *des++ = *src++;
    }
    *des = '\0';
    des = temp;
    return des;
}

int main()
{
    char *t = "asdf\tasd\tasasddas\\tdfasdf", *d;
    cout << t << endl;
    cout << Convert_t(d, t);
    getchar();
}

10、如何定义和实现一个类的成员函数为回调函数?
如果类的成员函数是一个callback函数,必须声明它为"static",才能把C++编译器加诸于函数的一个隐藏参数this去掉。

11、C++里面是不是所有的动作都是main()引起的?如果不是,请举例。
不是的。C++里面有些动作不是main()引起的,比如,全局对象的实例化、全局变量的动态空间申请,等等。下面是一个例子:

#include <iostream>
using namespace std;

char *des = new char[100]; // 全局变量的动态空间申请在程序运行之后、main运行之前完成,所以不是所有的动作都是main引起的。

int main()
{
    char *des = "abc";
    cout << des << endl;
    getchar();
}

12、C++里面如何声明const void f(void)函数为C程序中的库函数?
extern "C" const void f(void);
这样声明之后,相当于告诉C++编译器:函数const void f(void)是按C的方式编译和链接的,C++程序就可以直接使用这个C库函数,从而实现C++和C的混合编程。

13、编写一个函数,作用是把一个char组成的字符串循环右移n个。比如原来是"abcdefghi",如果n=2,移位后应该是"hiabcdefg"。

正确解答1:

void LoopMove(char* pStr, int steps)
{
    int n = strlen(pStr) - steps;
    char tmp[MAX_LEN];
    strcpy(tmp, pStr + n);
    strcpy(tmp + steps, pStr);
    *(tmp + strlen(pStr)) = '\0';
    strcpy(pStr, tmp);
}

正确解答2:

void LoopMove(char* pStr, int steps)
{
    int n = strlen(pStr) - steps;
    char tmp[MAX_LEN];
    memcpy(tmp, pStr + n, steps);
    memmove(pStr + steps, pStr, n); // 源和目的区域重叠,应使用memmove而非memcpy
    memcpy(pStr, tmp, steps);
}

14、写出输出结果

void fun(char s[10])
{
    char a[10];
    cout << "a:" << sizeof(a) << endl;
    cout << "s:" << sizeof(s) << endl;
}

输出(32位平台下,数组形参退化为指针):
a:10
s:4

15、内存的分配方式有几种?
答:
1. 从静态存储区域分配。内存在程序编译的时候就已经分配好,这块内存在程序的整个运行期间都存在,例如全局变量。
2. 在栈上创建。在执行函数时,函数内局部变量的存储单元都可以在栈上创建,函数执行结束时这些存储单元自动被释放。栈内存分配运算内置于处理器的指令集中,效率很高,但是分配的内存容量有限。
3. 从堆上分配,亦称动态内存分配。程序在运行的时候用malloc或new申请任意多少的内存,程序员自己负责在何时用free或delete释放内存。动态内存的生存期由我们决定,使用非常灵活,但问题也最多。

16、是不是一个父类写了一个virtual函数,如果子类覆盖它的函数不加virtual,也能实现多态?
答:可以。virtual修饰符会被隐性继承,子类覆盖它的函数时virtual可加可不加,同样能实现多态。private成员也会被继承,只是派生类没有访问权限而已。子类的空间里有父类的所有变量(static除外);同一个函数只存在一个实体(inline除外);在子类的空间里有父类的私有变量,但私有变量不能直接访问。

17、进程间通信的方式有?
进程间通信的方式有:共享内存、管道、Socket、消息队列、DDE等。

18、C++中什么数据分配在栈或堆中?new分配数据是在近堆还是远堆中?
答:
栈:存放局部变量、函数调用参数、函数返回值、函数返回地址,由系统管理。
堆:程序运行时动态申请,new和malloc申请的内存就在堆上。
(Google搜):DOS下程序是独占方式,堆分为近堆和远堆,近堆和栈是在数据段开辟的同一块内存地址,栈从下往上增长,堆从上向下分配,中间没有规定分界线,所以程序控制不当,如深层次的递归、大量的动态地址分配,很容易造成堆栈冲突,即堆栈地址重叠,从而造成死机和程序运行异常。堆和栈连在一起说的原因就是如此。至于远堆则是指在数据段和代码段以外计算机所有没有使用的剩余基本内存。Windows采用的是虚拟地址,内存分配方式就不一样了。
补充:DOS下堆栈的分配是由程序而不是操作系统自己控制的,具体分配大小、方式随编译系统、程序模式不同而异。C语言的堆栈分配代码在启动代码中。

19、全局变量和局部变量在内存中是否有区别?如果有,是什么区别?
全局变量储存在静态数据区,局部变量在堆栈中。

20、TCP/IP建立连接的过程?(3-way shake)
答:在TCP/IP协议中,TCP协议提供可靠的连接服务,采用三次握手建立一个连接。
第一次握手:建立连接时,客户端发送syn包(syn=j)到服务器,并进入SYN_SEND状态,等待服务器确认;
第二次握手:服务器收到syn包,必须确认客户的SYN(ack=j+1),同时自己也发送一个SYN包(syn=k),即SYN+ACK包,此时服务器进入SYN_RECV状态;
第三次握手:客户端收到服务器的SYN+ACK包,向服务器发送确认包ACK(ack=k+1),此包发送完毕,客户端和服务器进入ESTABLISHED状态,完成三次握手。

21、winsock建立连接的主要实现步骤?(文末附一个最简客户端代码示意)
答:服务器端:socket()建立套接字,绑定(bind)并监听(listen),用accept()等待客户端连接。
客户端:socket()建立套接字,连接(connect)服务器,连接上后使用send()和recv(),在套接字上写读数据,直至数据交换完毕,closesocket()关闭套接字。
服务器端:accept()发现有客户端连接,建立一个新的套接字,自身重新开始等待连接。该新产生的套接字使用send()和recv()写读数据,直至数据交换完毕,closesocket()关闭套接字。

22、static有什么用途?(请至少说明两种)
1) 在函数体内,一个被声明为静态的变量在这一函数被调用的过程中维持其值不变。
2) 在模块内(但在函数体外),一个被声明为静态的变量可以被模块内所有函数访问,但不能被模块外其它函数访问,它是一个本地的全局变量。
3) 在模块内,一个被声明为静态的函数只可被这一模块内的其它函数调用。也就是说,这个函数被限制在声明它的模块的本地范围内使用。

23、引用与指针有什么区别?
1) 引用必须被初始化,指针不必。
2) 引用初始化以后不能被改变,指针可以改变所指的对象。
3) 不存在指向空值的引用,但是存在指向空值的指针。

24、请写出下列代码的输出内容

#include <stdio.h>
main()
{
    int a, b, c, d;
    a = 10;
    b = a++;
    c = ++a;
    d = 10 * a++;
    printf("b,c,d:%d,%d,%d", b, c, d);
    return 0;
}

答:10,12,120

main()
{
    int a[5] = {1, 2, 3, 4, 5};
    int *ptr = (int *)(&a + 1);
    printf("%d,%d", *(a + 1), *(ptr - 1));
}

输出:2,5。*(a+1)就是a[1],*(ptr-1)就是a[4],执行结果是2,5。
&a+1不是首地址+1,系统会认为是加一个a数组的偏移,即偏移了一个数组的大小(本例是5个int)。
int *ptr = (int *)(&a + 1); 则ptr实际是&(a[5]),也就是a+5。原因如下:
&a是数组指针,其类型为int (*)[5];而指针加1要根据指针类型加上一定的值,不同类型的指针+1之后增加的大小不同。a是长度为5的int数组的指针,所以要加5*sizeof(int),所以ptr实际指向a[5]。但是ptr与(&a+1)的类型是不一样的(这点很重要),ptr是int*,所以ptr-1只会减去sizeof(int)。
a和&a的地址是一样的,但意思不一样:a是数组首地址,也就是a[0]的地址;&a是对象(数组)的首地址。a+1是数组下一元素的地址,即a[1];&a+1是下一个对象的地址,即a[5]。
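针对第21题,下面给出一个只演示连接流程的最简Winsock客户端示意(IP与端口为假设值,为突出流程省略了所有错误处理):

// 最简Winsock客户端流程示意(IP与端口为假设)
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);             // 初始化Winsock库

    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);   // 建立套接字
    sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5150);                  // 假设的服务器端口
    addr.sin_addr.s_addr = inet_addr("127.0.0.1"); // 假设的服务器IP

    connect(s, (sockaddr*)&addr, sizeof(addr));   // 连接服务器,三次握手在此完成
    send(s, "hello", 5, 0);                       // 在套接字上写数据
    char buf[64];
    recv(s, buf, sizeof(buf), 0);                 // 读数据
    closesocket(s);                               // 关闭套接字
    WSACleanup();
    return 0;
}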

2009-05-08

如何用 IObjectSafety 将 ATL 控件标记为可安全脚本化和初始化

要获得所需的功能,需要把 IObjectSafetyImpl 用作您的控件所派生的类之一,并重写 GetInterfaceSafetyOptions 和 SetInterfaceSafetyOptions。在本例中,所需的功能就是将控件标记为可安全用于脚本和初始化。要使用 IObjectSafetyImpl,需要将其添加到您的控件派生类的列表中。例如,在多边形(Polygon)教程中您会看到以下代码:

class ATL_NO_VTABLE CPolyCtl :
    ...
    public IObjectSafetyImpl // ATL's version of IObjectSafety
{
public:
    BEGIN_COM_MAP(CPolyCtl)
        ...
        COM_INTERFACE_ENTRY_IMPL(IObjectSafety) // Tie IObjectSafety to this COM map
    END_COM_MAP()

    STDMETHOD(GetInterfaceSafetyOptions)(REFIID riid, DWORD *pdwSupportedOptions, DWORD *pdwEnabledOptions)
    {
        ATLTRACE(_T("CObjectSafetyImpl::GetInterfaceSafetyOptions\n"));
        if (!pdwSupportedOptions || !pdwEnabledOptions)
            return E_FAIL;

        LPUNKNOWN pUnk;
        if (_InternalQueryInterface(riid, (void**)&pUnk) == E_NOINTERFACE)
        {
            // Our object doesn't even support this interface.
            return E_NOINTERFACE;
        }
        else
        {
            // Cleanup after ourselves.
            pUnk->Release();
            pUnk = NULL;
        }

        if (riid == IID_IDispatch)
        {
            // IDispatch is an interface used for scripting. If your
            // control supports other IDispatch or Dual interfaces, you
            // may decide to add them here as well. Client wants to know
            // if object is safe for scripting. Only indicate safe for
            // scripting when the interface is safe.
            *pdwSupportedOptions = INTERFACESAFE_FOR_UNTRUSTED_CALLER;
            *pdwEnabledOptions = m_dwSafety & INTERFACESAFE_FOR_UNTRUSTED_CALLER;
            return S_OK;
        }
        else if ((riid == IID_IPersistStreamInit) || (riid == IID_IPersistStorage))
        {
            // IID_IPersistStreamInit and IID_IPersistStorage are
            // interfaces used for Initialization. If your control
            // supports other Persistence interfaces, you may decide to
            // add them here as well. Client wants to know if object is
            // safe for initializing. Only indicate safe for initializing
            // when the interface is safe.
            *pdwSupportedOptions = INTERFACESAFE_FOR_UNTRUSTED_DATA;
            *pdwEnabledOptions = m_dwSafety & INTERFACESAFE_FOR_UNTRUSTED_DATA;
            return S_OK;
        }
        else
        {
            // We are saying that no other interfaces in this control are
            // safe for initializing or scripting.
            *pdwSupportedOptions = 0;
            *pdwEnabledOptions = 0;
            return E_FAIL;
        }
    }

    STDMETHOD(SetInterfaceSafetyOptions)(REFIID riid, DWORD dwOptionSetMask, DWORD dwEnabledOptions)
    {
        ATLTRACE(_T("CObjectSafetyImpl::SetInterfaceSafetyOptions\n"));
        if (!dwOptionSetMask && !dwEnabledOptions)
            return E_FAIL;

        LPUNKNOWN pUnk;
        if (_InternalQueryInterface(riid, (void**)&pUnk) == E_NOINTERFACE)
        {
            // Our object doesn't even support this interface.
            return E_NOINTERFACE;
        }
        else
        {
            // Cleanup after ourselves.
            pUnk->Release();
            pUnk = NULL;
        }

        // Store our current safety level to return in GetInterfaceSafetyOptions
        m_dwSafety |= dwEnabledOptions & dwOptionSetMask;

        if ((riid == IID_IDispatch) && (m_dwSafety & INTERFACESAFE_FOR_UNTRUSTED_CALLER))
        {
            // Client wants us to disable any functionality that would
            // make the control unsafe for scripting. The same applies to
            // any other IDispatch or Dual interfaces your control may
            // support. Because our control is safe for scripting by
            // default we just return S_OK.
            return S_OK;
        }
        else if (((riid == IID_IPersistStreamInit) || (riid == IID_IPersistStorage)) &&
                 (m_dwSafety & INTERFACESAFE_FOR_UNTRUSTED_DATA))
        {
            // Client wants us to make the control safe for initializing
            // from persistent data. For these interfaces, this control
            // is safe so we return S_OK. For any interfaces that are not
            // safe, we would return E_FAIL.
            return S_OK;
        }
        else
        {
            // This control doesn't allow Initialization or Scripting
            // from any other interfaces so return E_FAIL.
            return E_FAIL;
        }
    }
    ...
};

在ATL 3.0中,IObjectSafetyImpl的实现已经更改,现在可以把安全选项作为模板参数提供。例如,上述类的声明将变为:

class ATL_NO_VTABLE CPolyCtl :
    ...
    public IObjectSafetyImpl<CPolyCtl, INTERFACESAFE_FOR_UNTRUSTED_CALLER | INTERFACESAFE_FOR_UNTRUSTED_DATA>
{
public:
    BEGIN_COM_MAP(CPolyCtl)
        ...
    END_COM_MAP()
    ...
};

这样,您将不必再重写上述两个方法。有关其它信息,请参阅Microsoft知识库文章 192093(PRB: 将 IObjectSafetyImpl 移植到 ATL 3.0 时出现编译器错误)。

2009-04-30

在ATL中如何执行带参数的JavaScript函数

(摘录自完整示例的后半部分:)

bstr.CopyTo(&dispparams.rgvarg[i].bstrVal);
dispparams.rgvarg[i].vt = VT_BSTR;
}
dispparams.cNamedArgs = 0;

EXCEPINFO excepInfo;
memset(&excepInfo, 0, sizeof excepInfo);
CComVariant vaResult;
UINT nArgErr = (UINT)-1; // initialize to invalid arg

hr = pScript->Invoke(dispid, IID_NULL, 0, DISPATCH_METHOD, &dispparams, &vaResult, &excepInfo, &nArgErr);
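上面的摘录只保留了Invoke调用的后半段,下面按常规做法补一个完整的示意(函数名与参数个数为假设,调用方需已取得脚本引擎的IDispatch指针,并包含<atlbase.h>以使用CComVariant):

// 示意:通过IDispatch调用带一个BSTR参数的脚本函数(函数名为假设)
HRESULT CallScriptFunc(IDispatch* pScript, LPCOLESTR szFuncName, BSTR bstrArg)
{
    DISPID dispid;
    LPOLESTR pszName = const_cast<LPOLESTR>(szFuncName);
    HRESULT hr = pScript->GetIDsOfNames(IID_NULL, &pszName, 1,
                                        LOCALE_SYSTEM_DEFAULT, &dispid);
    if (FAILED(hr))
        return hr;

    CComVariant varArg(bstrArg);                 // 多个参数时需按逆序填入rgvarg
    DISPPARAMS dispparams = { &varArg, NULL, 1, 0 }; // 1个位置参数,0个命名参数
    EXCEPINFO excepInfo;
    memset(&excepInfo, 0, sizeof excepInfo);
    CComVariant vaResult;
    UINT nArgErr = (UINT)-1;                     // initialize to invalid arg
    return pScript->Invoke(dispid, IID_NULL, 0, DISPATCH_METHOD,
                           &dispparams, &vaResult, &excepInfo, &nArgErr);
}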

2009-04-30

ATL中如何传递SAFEARRAY给VBScript.pdf

(摘录自完整示例的后半部分:)

{
    CComVariant cv(i + 1); // 数组:1,2,3 ... size
    ::SafeArrayPutElement(psa, &i, &cv);
}
pvarOut->vt = VT_ARRAY | VT_VARIANT;
pvarOut->parray = psa;
return S_OK;
}

为什么实际上是一个简单至极的int数组却要用VT_ARRAY | VT_VARIANT类型呢?其实这都是为了支持脚本环境:在脚本环境下,最好使用VARIANT。完整的构造过程可参考下面补充的示意。
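摘录缺少了函数的前半部分,下面补一个完整的构造示意(方法名、类名与数组大小均为假设):

// 示意:构造VT_ARRAY | VT_VARIANT的SAFEARRAY并通过VARIANT*返回(方法名为假设)
STDMETHODIMP CMyObj::GetIntArray(VARIANT* pvarOut)
{
    const LONG size = 3;
    SAFEARRAYBOUND bound = { size, 0 };          // size个元素,下标从0开始
    SAFEARRAY* psa = ::SafeArrayCreate(VT_VARIANT, 1, &bound);
    if (psa == NULL)
        return E_OUTOFMEMORY;

    for (LONG i = 0; i < size; i++)
    {
        CComVariant cv(i + 1);                   // 数组:1,2,3 ... size
        ::SafeArrayPutElement(psa, &i, &cv);     // 元素被复制进数组
    }
    pvarOut->vt = VT_ARRAY | VT_VARIANT;
    pvarOut->parray = psa;
    return S_OK;
}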

2009-04-30

网页设计样式教程 css20.chm

什么是样式表:
CSS是Cascading Style Sheet的缩写,译作“层叠样式表单”,是用于(增强)控制网页样式并允许将样式信息与网页内容分离的一种标记性语言。

如何将样式表加入您的网页:
你可以用以下三种方式将样式表加入您的网页,而最接近目标的样式定义优先权越高。高优先权样式将继承低优先权样式的未重叠定义,但覆盖重叠的定义。例外请参阅 !important 声明。

链入外部样式表文件 (Linking to a Style Sheet)
你可以先建立外部样式表文件(.css),然后使用HTML的link对象。示例如下:
<head>
<title>文档标题</title>
<link rel=stylesheet href="http://www.dhtmlet.com/dhtmlet.css" type="text/css">
</head>
而在XML中,你应该如下例所示在声明区中加入:
<? xml-stylesheet type="text/css" href="http://www.dhtmlet.com/dhtmlet.css" ?>

定义内部样式块对象 (Embedding a Style Block)
你可以在你的HTML文档的<HTML>和<BODY>标记之间插入一个<STYLE>...</STYLE>块对象。定义方式请参阅样式表语法。示例如下:
<html>
<head>
<title>文档标题</title>
<style type="text/css">
<!--
body {font: 10pt "Arial"}
h1 {font: 15pt/17pt "Arial"; font-weight: bold; color: maroon}
h2 {font: 13pt/15pt "Arial"; font-weight: bold; color: blue}
p {font: 10pt/12pt "Arial"; color: black}
-->
</style>
</head>
<body>
请注意,这里将style对象的type属性设置为"text/css",是允许不支持这种类型的浏览器忽略样式表单。

内联定义 (Inline Styles)
内联定义即是在对象的标记内使用对象的style属性定义适用于其的样式表属性。示例如下:
<p style="margin-left: 0.5in; margin-right: 0.5in">这一行被增加了左右的外补丁</p>

样式表语法 (CSS Syntax)
Selector { property: value }
参数说明:
Selector -- 选择符
property : value -- 样式表定义。属性和属性值之间用冒号(:)隔开,多个定义之间用分号(;)隔开。

继承的值 (The 'Inherit' Value)
每个属性都有一个指定的值:Inherit。它的意思是:将父对象的值等同为计算值得到。这个值通常仅仅是备用的,显式地声明它可用来强调。

2009-04-29
