[Original] MBI5024 driver

Driver code for the MBI5024, a 16-bit LED display driver IC; it also applies to the MBI5026. Written in C for the 8051. #include … sbit MBI_SDI = P0^1; sbit MBI_CLK = P0^0; sbit MBI_OE = P0^2; sbit MBI_LE = P0^3; voi…

2014-11-18 10:20:04 9677
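
A minimal sketch of how such a driver typically shifts data out, based on the pin names in the excerpt above; gpio_write and the Pin constants are hypothetical stand-ins for the Keil C51 sbit pins, not the post's actual code.

    #include <cstdio>

    // Hypothetical GPIO helper standing in for the Keil C51 "sbit" pin writes in the
    // excerpt above (P0^0..P0^3); here it only traces instead of driving hardware.
    enum Pin { MBI_SDI, MBI_CLK, MBI_LE, MBI_OE };
    static void gpio_write(Pin pin, int level) {
        std::printf("pin %d -> %d\n", (int)pin, level);
    }

    // Shift one 16-bit word into the MBI5024, MSB first, then latch it to the outputs.
    static void mbi5024_write(unsigned short data) {
        for (int i = 15; i >= 0; --i) {
            gpio_write(MBI_SDI, (data >> i) & 1);   // present the next bit on SDI
            gpio_write(MBI_CLK, 1);                 // rising edge of CLK shifts it in
            gpio_write(MBI_CLK, 0);
        }
        gpio_write(MBI_LE, 1);                      // pulse LE: shift register -> output latches
        gpio_write(MBI_LE, 0);
        gpio_write(MBI_OE, 0);                      // OE is active low: enable the LED outputs
    }

    int main() {
        mbi5024_write(0xAAAA);                      // light every other channel
        return 0;
    }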

[Original] Storing and retrieving images with MySQL on Linux

#include … #include "mysql/mysql.h" // MySQL handles MYSQL mysql; MYSQL_RES *rs = NULL; MYSQL_ROW row; // connect to the database int mysql_con(){ const char *host = "local…

2014-10-27 17:34:31 620
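
A rough sketch of the store/retrieve flow with the MySQL C API, in the spirit of the excerpt; the table name, column names, credentials and file names are made up for illustration.

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <mysql/mysql.h>

    // Store binary image data in a BLOB column; the raw bytes must be escaped
    // before being embedded in the SQL text.
    static int store_image(MYSQL *conn, const char *buf, unsigned long len) {
        char *esc = (char *)malloc(2 * len + 1);
        unsigned long esc_len = mysql_real_escape_string(conn, esc, buf, len);
        char *sql = (char *)malloc(esc_len + 128);
        unsigned long n = sprintf(sql, "INSERT INTO pictures(name, data) VALUES('demo', '");
        memcpy(sql + n, esc, esc_len);
        n += esc_len;
        n += sprintf(sql + n, "')");
        int rc = mysql_real_query(conn, sql, n);    // length form, since the data is binary
        free(esc);
        free(sql);
        return rc;
    }

    // Read it back: the BLOB length must come from mysql_fetch_lengths(), not strlen().
    static void load_image(MYSQL *conn, const char *outfile) {
        const char *q = "SELECT data FROM pictures WHERE name='demo'";
        if (mysql_real_query(conn, q, (unsigned long)strlen(q)) != 0) return;
        MYSQL_RES *res = mysql_store_result(conn);
        MYSQL_ROW row = mysql_fetch_row(res);
        if (row) {
            unsigned long *lens = mysql_fetch_lengths(res);
            FILE *fp = fopen(outfile, "wb");
            if (fp) { fwrite(row[0], 1, lens[0], fp); fclose(fp); }
        }
        mysql_free_result(res);
    }

    int main() {
        MYSQL mysql;
        mysql_init(&mysql);
        // host/user/password/database are placeholders, as in the excerpt's mysql_con().
        if (!mysql_real_connect(&mysql, "localhost", "user", "password", "test", 0, NULL, 0)) {
            fprintf(stderr, "connect failed: %s\n", mysql_error(&mysql));
            return 1;
        }
        const char bytes[] = "\x89PNG...";          // stand-in for real image data
        store_image(&mysql, bytes, sizeof(bytes) - 1);
        load_image(&mysql, "out.png");
        mysql_close(&mysql);
        return 0;
    }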

[Reposted] Installing MySQL on CentOS

To do J2EE development on Linux you first have to set up the J2EE environment, which means installing the JDK, Tomcat and Eclipse (covered in detail in the earlier post "Linux学习之CentOS(七)--CentOS下j2ee环境搭建"). For web projects you can of course install MyEclipse on Linux; the procedure is exactly the same as for Eclipse, so it is not written up here. With the JDK, Tomcat and Ec…

2014-07-23 19:21:15 544

[Reposted] Installing and configuring MySQL on Ubuntu

I often do development work on Ubuntu Linux, and much of it involves MySQL, so I wrote up the whole process for my own future reference and to save newcomers some detours. The tasks were essentially these: install a MySQL server on the Ubuntu machine, then configure specific accounts and privileges to run some SQL scripts and build an environment with a complete data set. Let's begin:

2014-07-23 18:01:04 403

[Original] Fixing garbled song tags in Rhythmbox on Ubuntu

Command: run the command in a terminal…

2014-05-10 22:52:22 385

[Original] Installing WPS Office on Linux

Installing WPS on Linux (requires sudo): 1. Download wps-office_8.1.0.3724~b1p2_i386.deb and wps_symbol_fonts.zip. 2. Install the package: dpkg -i wps-office_8.1.0.3724~b1p2_i386.deb. 3. Unpack wps_symbol_fonts.zip…

2013-11-06 09:23:13 1165

[Original] How many 1 bits are in a char

Counting how many bits of a char are set to 1.

2013-09-23 14:52:19 661
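
One common way to do it, as a small illustration: clear the lowest set bit on each iteration and count how many times that takes.

    #include <cstdio>

    // Count how many bits are set to 1 in a char.
    static int count_ones(unsigned char c) {
        int n = 0;
        while (c) {
            c &= (unsigned char)(c - 1);   // clear the lowest set bit
            ++n;
        }
        return n;
    }

    int main() {
        std::printf("%d\n", count_ones(0xA7));   // 10100111 -> 5
        return 0;
    }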

[Original] Logistics tracking: a GPS and GPRS application

Linux multithreading applied to GPS and GPRS.

2013-09-16 18:39:15 1507

[Original] SMS management system (Linux C)

Overview: this course project simulates a mobile phone's SMS feature, using file operations and linked-list operations to manage the messages. Source: #include… typedef struct _TIME_ { int month; int day; int year; } TIME; typedef struct _MESSAGE_ { c…

2013-09-16 18:25:18 869
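
A small sketch of the kind of structures and list handling the excerpt hints at; the fields beyond TIME and the add_message helper are assumptions, since the original struct is cut off.

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    typedef struct _TIME_ { int month, day, year; } TIME;

    // The original _MESSAGE_ struct is truncated in the excerpt; these fields are assumed.
    typedef struct _MESSAGE_ {
        char number[16];            // phone number
        char text[161];             // message body
        TIME time;                  // date stamp
        struct _MESSAGE_ *next;     // singly linked list
    } MESSAGE;

    // Append a new message to the list and return the (possibly new) head.
    static MESSAGE *add_message(MESSAGE *head, const char *number, const char *text, TIME t) {
        MESSAGE *node = (MESSAGE *)calloc(1, sizeof(MESSAGE));
        strncpy(node->number, number, sizeof(node->number) - 1);
        strncpy(node->text, text, sizeof(node->text) - 1);
        node->time = t;
        if (!head) return node;
        MESSAGE *p = head;
        while (p->next) p = p->next;
        p->next = node;
        return head;
    }

    int main() {
        TIME t = {9, 16, 2013};
        MESSAGE *list = add_message(NULL, "13800000000", "hello", t);
        printf("%s: %s (%d-%d-%d)\n", list->number, list->text, t.year, t.month, t.day);
        free(list);
        return 0;
    }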

[Reposted] 60 Linux commands you must learn

Linux provides a large number of commands with which a great deal of work can be done effectively: disk operations, file access, directory operations, process management, file permissions and so on. Working on a Linux system is inseparable from the commands the system provides, so to really understand Linux you have to start from the commands; learning the basic ones leads to a deeper understanding of the system. The number of commands varies between distributions, but even the smallest distribution ships more than 200.

2013-09-12 18:15:09 1506

[Reposted] 10 Ubuntu tricks you may not know

The author has recently started co-writing a book of Ubuntu tips, tentatively titled "Ubuntu Kung Fu". The book probably will not appear until next year, so he shares these 10 tricks first. They are not necessarily Ubuntu-only; they have simply been verified on the Ubuntu distribution. 1. Open the Run Application dialog with superuser privileges. You probably already know that Alt+F2 opens the "Run Application" dialog, where you can type any command line. If you are in a terminal…

2013-09-12 11:07:08 470

[Reposted] A few GDB debugging techniques you should know

Seven or eight years ago I wrote "Debugging Programs with GDB", and ever since then people have been asking me GDB questions over MSN and by email; they still do today. Some questions come up again and again over the years: partly my old article may not have explained things clearly, and partly the questions people keep asking are probably the most useful ones, so I list them here. Additions are welcome. 1. Multithreaded debugging. This is probably asked most often. The important commands are: in…

2013-09-12 11:01:36 433

[Reposted] 15 syntax-highlighting tools for prettier code

Ubuntu 11.04 was officially released at the end of April. The new version adopts the Unity interface for the first time, and many operations differ from GNOME, so you may feel at a loss at first; new things always take a while to get used to, after which you will find Ubuntu 11.04 is still quite…

2013-09-12 11:00:05 833

[Reposted] Software industry job roles: systems, security and maintenance

I group these three categories together partly because my own experience with them is limited to what I touch in day-to-day work, and partly because the skills they require are skills many other positions also need; these roles simply demand them at a deeper level and with a larger workload. A system administrator, for example, monitors the servers every day, creates and authorizes accounts for people joining the team, and deactivates the accounts of colleagues who leave. Let's introduce these roles one by one; the last point is that they are related to software positions yet stand at some distance from the software profession. Network engineer…

2013-09-10 11:56:20 639

[Reposted] Software industry job roles: testing

Testing has developed over many years: universities now offer software-testing majors, and it has long been a research direction; my own graduate research topic was conformance testing of network protocols. Here I only describe what a test engineer concretely does. Roughly: after fully understanding the software's business requirements, the engineer writes functional test cases for each feature and its categories and groups the cases into test suites. Test cases are the most basic building block of the test documentation; the test engineer tests the software against them, and the software under test is…

2013-09-10 11:55:40 707 1

[Reposted] Software industry job roles: project management

Whenever the company wins a contract, the boss hands the job of delivering it on time and with quality to a project manager. It is a role that demands all-round ability: technical knowledge, management skill and a suitable temperament, and we will look at the work from those three angles. Why does the role exist at all? Simply because a project needs one person to plan, schedule and execute it. In a small company that person is the boss, but once a software company reaches a certain size the boss no longer has the energy to run several projects at once, so each project is assigned to a suitable…

2013-09-10 11:55:38 643

[Translated] Software industry job roles: sales

Current job categories in the software industry. Sales: pre-sales engineer, post-sales engineer, system integration engineer. Testing: test engineer, product test manager. Project management: project manager, QA engineer. Systems: network engineer, infrastructure engineer. Security: security system administrator, network security administrator, security development engineer. Maintenance: database administrator, system administrator, operations and maintenance administrator. Development: requirements analyst, development engineer, …

2013-09-10 11:53:37 1021

[Reposted] 15 things a programmer should remember in order to achieve something remarkable

1. Take a different path. Compete in a market that favours you; if you are content to be one of the crowd, you will end up competing directly with programmers in low-wage countries. 2. Understand your company. From my experience working in a hospital, a consultancy, a logistics firm and a large technology company, this is no empty advice. Companies operate in very different ways, and if you understand how the business runs you stand out: to the company (or the client) you are an asset taking part in running the business, and your work produces value directly!

2013-09-10 11:42:01 957

[Original] Sorting a string array using an array of pointers

#include#include#includeint main(){   void sort(char *name[],int n);   void printf(char *name[],int n);   char *name[]={"faa","hbb","acc","cdd","eee"};   int n=5;   s

2013-08-02 15:50:18 777
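
The excerpt is cut off before the helpers are defined; a complete minimal version of the same idea follows (only the pointers are swapped, not the strings; the second helper is renamed print to avoid shadowing the standard printf).

    #include <cstdio>
    #include <cstring>

    // Selection-sort the pointer array; only the pointers move, not the string data.
    static void sort(const char *name[], int n) {
        for (int i = 0; i < n - 1; ++i)
            for (int j = i + 1; j < n; ++j)
                if (strcmp(name[j], name[i]) < 0) {
                    const char *tmp = name[i]; name[i] = name[j]; name[j] = tmp;
                }
    }

    static void print(const char *name[], int n) {
        for (int i = 0; i < n; ++i) printf("%s\n", name[i]);
    }

    int main() {
        const char *name[] = {"faa", "hbb", "acc", "cdd", "eee"};
        int n = 5;
        sort(name, n);
        print(name, n);   // acc cdd eee faa hbb
        return 0;
    }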

[Reposted] Bit manipulation basics: a complete summary

Keywords: C/C++ bit operations, bit tricks, odd/even tests, swapping two numbers, changing sign, absolute value, packing data into bits, sieving primes, fun uses of bit operations, written-test and interview questions. The bit-operations series is split into a basics part and an advanced part. The basics part is a complete summary meant to organize the material; the advanced part works through written-test and interview questions from large IT companies such as Microsoft, Tencent, Baidu and 360 in detail, so that you can handle bit-operation problems confidently. First, a…

2013-05-01 17:00:10 445
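
A couple of the tricks the post refers to, as a quick illustration.

    #include <cstdio>

    int main() {
        int a = 3, b = 5;

        // Swap two numbers without a temporary (only safe when a and b are distinct objects).
        a ^= b; b ^= a; a ^= b;
        printf("a=%d b=%d\n", a, b);                // a=5 b=3

        // Odd/even test: the lowest bit decides.
        printf("%s\n", (a & 1) ? "odd" : "even");   // 5 -> odd

        // Absolute value via the sign mask (the usual two's-complement trick).
        int x = -7, mask = x >> 31;                 // mask is all 1s when x is negative
        printf("%d\n", (x + mask) ^ mask);          // 7
        return 0;
    }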

[Reposted] The difference between pointers and references in C++

Contents: the difference between pointers and references; special cases; const; how pointers and references are implemented; passing by pointer vs passing by reference. In plain language: a pointer: for a type T, T* is the pointer-to-T type, that is, a variable of type T* can hold the address of a T object, and T may carry qualifiers such as const, volatile and so on. See…

2012-09-15 00:08:59 726
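
A tiny example of the practical difference described above: a reference must be bound at initialization and cannot be reseated, while a pointer can be redirected (or be null).

    #include <cstdio>

    int main() {
        int a = 1, b = 2;

        int *p = &a;        // a pointer may be reseated later
        p = &b;
        *p = 20;            // modifies b through the pointer

        int &r = a;         // a reference must be initialized and always names a
        r = 10;             // assigns to a; it does NOT rebind r to anything else

        std::printf("a=%d b=%d\n", a, b);   // a=10 b=20
        return 0;
    }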

[Original] A C++ encapsulation example

#include #include using namespace std; class AutoNewDel { private: char * m_szBuf; int * m_count; unsigned int m_nSize; public: AutoNewDel(unsigned int n=1) // allocate the memory in the constructor…

2012-09-14 23:42:49 378
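
The excerpt above is cut off; a stripped-down sketch of the same pattern follows (a class that news its buffer in the constructor and deletes it in the destructor). The original class also keeps a shared m_count for copies; that part is omitted here, so copying is simply disabled in this sketch.

    #include <cstring>
    #include <cstdio>

    class AutoNewDel {
    private:
        char *m_szBuf;
        unsigned int m_nSize;
    public:
        explicit AutoNewDel(unsigned int n = 1) : m_nSize(n) {
            m_szBuf = new char[n];            // acquire the buffer in the constructor
            std::memset(m_szBuf, 0, n);
        }
        ~AutoNewDel() { delete[] m_szBuf; }   // release it automatically in the destructor
        char *get() { return m_szBuf; }
        unsigned int size() const { return m_nSize; }
    private:
        AutoNewDel(const AutoNewDel&);            // copying disabled in this sketch;
        AutoNewDel &operator=(const AutoNewDel&); // the original uses a shared counter instead
    };

    int main() {
        AutoNewDel buf(32);
        std::strcpy(buf.get(), "managed buffer");
        std::printf("%s (%u bytes)\n", buf.get(), buf.size());
        return 0;   // the destructor frees the memory here
    }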

[Original] const member variables and member functions

A const member variable must be initialized and can never be updated afterwards. A const member variable declared in a class can only be initialized through the constructor's initializer list. #include using namespace std; class Ext { public: void print(); Ext(int i); const int &r; private: const int a; sta…

2012-09-13 23:44:47 349
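
A compilable version of the idea in the excerpt: the const member (and the reference member) can only be given a value in the constructor's initializer list.

    #include <cstdio>

    class Ext {
        const int a;        // const member: can only be set in the initializer list
    public:
        const int &r;       // reference member: likewise initializer-list only
        Ext(int i) : a(i), r(a) {}
        void print() const { std::printf("a=%d r=%d\n", a, r); }
    };

    int main() {
        Ext e(42);
        e.print();          // a=42 r=42
        // e.r = 1;         // would not compile: r refers to a const int
        return 0;
    }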

[Original] What const does in functions

1. On a parameter: the function cannot modify its value. 2. On a return value: the value returned by the function cannot be modified. 3. On a class member function. Example: #include #include using namespace std; // const qualifying an input parameter char StringCopy(char *strDestination, const char *strSource){…

2012-09-13 23:42:02 470
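
A short compilable illustration of the three uses listed above (const parameter, const return value, const member function); the copy function follows the excerpt's example, with the return type adjusted to char* so the result is usable.

    #include <cstdio>

    // 1. const parameter: the function may read but not modify the source string.
    char *StringCopy(char *strDestination, const char *strSource) {
        char *p = strDestination;
        while ((*p++ = *strSource++) != '\0') {}
        return strDestination;
    }

    // 2. const return value: callers cannot modify what the returned reference refers to.
    const int &min_ref(const int &a, const int &b) { return a < b ? a : b; }

    class Point {
        int x_;
    public:
        explicit Point(int x) : x_(x) {}
        // 3. const member function: promises not to modify the object.
        int x() const { return x_; }
    };

    int main() {
        char buf[16];
        std::printf("%s\n", StringCopy(buf, "hello"));
        Point p(3);
        int a = 1, b = 2;
        std::printf("%d %d\n", p.x(), min_ref(a, b));
        return 0;
    }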

[Original] STL components, part 1

STL provides three kinds of components: containers, iterators and algorithms, all of which support the generic-programming standard. Containers come in two main groups: sequence containers and associative containers (set, multiset, map, multimap), the latter holding the keys used to look up elements. Iterators are used to traverse containers. The STL algorithm library contains four classes of algorithms: sorting, non-mutating, mutating and numeric algorithms. vector: the vector container not only behaves like an array…

2012-07-31 15:37:19 273
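
A brief example of the vector behaviour described above: array-like indexing plus automatic growth, with an STL algorithm applied through iterators.

    #include <cstdio>
    #include <vector>
    #include <algorithm>

    int main() {
        std::vector<int> v;
        for (int i = 5; i >= 1; --i) v.push_back(i);    // grows automatically: 5 4 3 2 1
        std::sort(v.begin(), v.end());                  // an STL algorithm over iterators
        for (std::size_t i = 0; i < v.size(); ++i)
            std::printf("%d ", v[i]);                   // array-style access: 1 2 3 4 5
        std::printf("\n");
        return 0;
    }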

[Reposted] STL components

This article has eleven parts: the main members of the vector, deque, list, stack, queue, priority_queue, set, multiset, map and multimap classes, plus the STL algorithm functions. (1) Main members of vector: vector is a variable-length vector and is quite flexible…

2012-07-31 09:49:40 450

[Reposted] A short summary of common MVC problems

When using jQuery in an MVC project, Chinese text submitted with $.post comes out garbled. Solution: encode the data when posting, using the escape method: $.post("@Url.Action("AddFriendLink" , "Setup")", {"Name": escape(name)}, function(data){ if (data>0){…

2012-07-12 09:40:41 685

[Reposted] Page fragment caching (2)

In the previous article I described our home-grown approach of implementing fragment caching with custom Velocity tags. It solved the problem, but it also introduced a few bugs and felt rather like a hack (although I do like hacks). In fact the industry has mature solutions for fragment caching, including a would-be W3C standard: ESI (Edge Side Include). ESI itself is nothing special, just a set of XML tags. ESI and SSI (Se…

2012-07-12 09:37:28 727

[Reposted] Page fragment caching (1)

A page is usually made up of many parts, and different parts are updated at different rates, so a single caching policy for the whole page is not a good fit; besides, many systems have that wretched "Welcome XXX" in the top-left corner, user-specific information we cannot cache. In these situations we need fragment caching: different caching policies for different parts (fragments) of the page. To use fragment caching we first have to split the page up. A crude option is iframes, using an iframe to divide the page…

2012-07-12 09:36:49 718

[Reposted] Getting started with ASP.NET MVC (part 2)

Continuing from the previous article, let's complete the validation feature. The System.ComponentModel.DataAnnotations namespace already provides some basic attribute classes for validation; just attach them to the Model's fields. See MSDN for the full list of attribute classes; here is an example: public class Movie{ [Key,DatabaseGenerated(DatabaseGeneratedOption…

2012-07-12 09:35:40 449

[Reposted] Getting started with ASP.NET MVC (part 1)

Reference: ASP.NET MVC Overview. 1. The MVC pattern. MVC is a software architecture pattern that divides a system into three parts: the Model, the View and the Controller. It was first proposed by Trygve Reenskaug in 1974 and developed at Xerox PARC in the 1980s for the Smalltalk programming language as a…

2012-07-12 09:34:36 638

[Reposted] ASP.NET MVC 3.0, project walkthrough part 5

As the shopping cart keeps taking shape, we finish up with the order module. We need one more model: in the domain model class library, add a ShippingDetail class; its code is as follows: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.ComponentMo…

2012-07-12 09:29:03 1534

[Reposted] ASP.NET MVC 3.0, project walkthrough part 4

Next, without further ado, the shopping cart. First we need a cart entity. Defining the cart entity: the cart is part of our application's business domain, so we create the entity in the domain model. In the Entities folder of the domain project "SportsStore.Domain", create the cart entity as shown in figure 1. Figure 1. Our Cart.cs (shopping cart…

2012-07-12 09:27:58 3903

[Reposted] ASP.NET MVC 3.0, project walkthrough part 3

So far the project is only a rough skeleton; next we need to finish the navigation, shopping cart and order modules. Only when these are done is the shopping workflow more or less complete. Let's start with navigation. To show navigation to users we should do the following: extend our view model (ProductsListViewModel) so it can filter products by category, rework our URLs and adjust the routing, create the category list,…

2012-07-12 09:26:42 2100

[Reposted] ASP.NET MVC 3.0, project walkthrough part 2

We can already show a simple view, but IProductRepository is still a fake returning test data, so we need a database to hold the project's data; let's create it now. We will use SQL Server as the database and access it through Entity Framework (EF), the .NET ORM framework. (ORM, "object-relational mapping", maps the relational data in the database to…

2012-07-12 09:25:15 2558

[Reposted] ASP.NET MVC 3.0, project walkthrough part 1

The previous posts covered MVC background; from this one on we build a real project, a simple shopping workflow. First create a blank solution, as shown in figure 1. We plan three modules: one holding our domain model (Domain), one holding the MVC web application, and one for unit tests. The domain model (Domain) is a class library project, followed by an ASP.NET MVC 3 web applica…

2012-07-12 09:23:09 1680

[Reposted] My WCF journey (1): creating a simple WCF program

To give the reader an intuitive picture of the WCF programming model, I will walk step by step through building a complete WCF application. Its functionality is simple, but it covers the basic structure of a full WCF application, and for readers who do not yet know WCF much, this example will take you formally into the WCF world. In this example we implement a simple calculator service (CalculatorService) that offers basic addition, subtraction, multiplication and division. Like traditional distributed communication frameworks, WCF essentially provides a…

2012-07-08 14:51:58 623

[Reposted] WPF windows

1. The window class. In Visual Studio and Expression Blend, custom windows inherit from System.Windows.Window (typed windows). A window is defined in two parts: 1. the XAML file: 1: Window 2: xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"…

2012-07-07 09:50:58 2014

[Reposted] The ASP.NET MVC 3 Model, introduced step by step through a simple example

Today we cover two aspects of the Model: 1. an introduction to the ASP.NET MVC 3 Model through a simple step-by-step example; 2. some Model validation in ASP.NET MVC 3. In MVC, the Model is mainly responsible for maintaining data state: it retrieves data from the data store and hands it to the controller, and data sent from the client is written back to the data store after processing. It is one of the more important layers in MVC. Why do I say data store rather than data…

2012-07-06 09:19:10 1120

[Reposted] Solving the longest common subsequence problem with dynamic programming

In dynamic programming, a complex problem often cannot be split into just a few subproblems; it decomposes into a whole series of them. If we naively break the big problem into subproblems and combine their solutions, the solution time grows exponentially with the problem size. To avoid repeatedly solving the same subproblems, we introduce an array that stores the solution of every subproblem, whether or not it turns out to be useful for the final answer; this is the basic method of dynamic programming. [Problem] Given two character sequences, find the longest common…

2012-05-09 15:53:09 15432
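
A compact version of the standard table-filling solution the post describes, storing every subproblem's answer in an array so none is solved twice.

    #include <cstdio>
    #include <cstring>
    #include <vector>

    // Length of the longest common subsequence of a and b: c[i][j] holds the answer
    // for the prefixes a[0..i) and b[0..j).
    static int lcs(const char *a, const char *b) {
        int n = (int)std::strlen(a), m = (int)std::strlen(b);
        std::vector<std::vector<int> > c(n + 1, std::vector<int>(m + 1, 0));
        for (int i = 1; i <= n; ++i)
            for (int j = 1; j <= m; ++j)
                c[i][j] = (a[i-1] == b[j-1]) ? c[i-1][j-1] + 1
                                             : (c[i-1][j] > c[i][j-1] ? c[i-1][j] : c[i][j-1]);
        return c[n][m];
    }

    int main() {
        std::printf("%d\n", lcs("ABCBDAB", "BDCABA"));   // 4 ("BCBA")
        return 0;
    }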

c_mqtt.tgz

c_mqtt

2021-09-10

MBI5024 LED driver IC

The MBI5024 is a driver IC designed specifically for LED display panels; it lets the user control LED brightness precisely.

2014-11-18

ATmega32中文资料.pdf (ATmega32 datasheet, Chinese)

• High-performance, low-power 8-bit AVR® microcontroller
• Advanced RISC architecture – 131 instructions, most executing in a single clock cycle – 32 8-bit general-purpose working registers – fully static operation – up to 16 MIPS at 16 MHz – two-cycle hardware multiplier
• Non-volatile program and data memory – 32K bytes of in-system programmable Flash (endurance: 10,000 write/erase cycles) – optional boot code section with independent lock bits, in-system programming by the on-chip boot program, true read-while-write operation – 1024 bytes of EEPROM (endurance: 100,000 cycles) – 2K bytes of on-chip SRAM – programmable lock bits for program encryption
• JTAG interface (IEEE 1149.1 compliant) – boundary-scan, extended on-chip debug support, programming of Flash, EEPROM, fuses and lock bits through JTAG
• Peripheral features – two 8-bit timer/counters with separate prescalers and compare modes – one 16-bit timer/counter with prescaler, compare and capture modes – real-time counter (RTC) with separate oscillator – four PWM channels – 8-channel 10-bit ADC: 8 single-ended channels, 7 differential channels in the TQFP package, 2 differential channels with programmable gain (1x, 10x or 200x) – byte-oriented two-wire interface – programmable serial USART – master/slave SPI serial interface – programmable watchdog timer with separate on-chip oscillator – on-chip analog comparator
• Special microcontroller features – power-on reset and programmable brown-out detection – internal calibrated RC oscillator – external and internal interrupt sources – six sleep modes: Idle, ADC noise reduction, power-save, power-down, Standby and extended Standby
• I/O and packages – 32 programmable I/O lines – 40-pin PDIP, 44-lead TQFP and 44-pad MLF packages
• Operating voltage – ATmega32L: 2.7 - 5.5 V – ATmega32: 4.5 - 5.5 V
• Speed grades – ATmega32L: 0 - 8 MHz – ATmega32: 0 - 16 MHz
• ATmega32L power consumption at 1 MHz, 3 V, 25 °C – active mode: 1.1 mA – idle mode: 0.35 mA – power-down mode: < 1 μA

2014-09-03

Zigbee协议栈中文说明.pdf (ZigBee protocol stack, Chinese description)

The ZigBee stack is built on the IEEE 802.15.4 standard, which defines the protocol's MAC and PHY layers. A ZigBee device should therefore include the IEEE 802.15.4 PHY and MAC layers (the standard defines the RF radio and communication with neighbouring devices) plus the ZigBee stack layers: the network layer (NWK), the application layer and the security service provider layer. Figure 1-1 gives an overview of these components.

2014-09-03

USR-WIFI232-X-V4.2.pdf

User manual from USR IOT (有人科技) for the USR-WIFI232-X, V4.2. The module covers the USR-WIFI232-A/B/C/D and derived products such as the USR-WIFI232-2/600/62E.

2014-09-03

同类多传感器自适应加权估计的数据级融合算法研究.pdf (research on a data-level fusion algorithm with adaptive weighted estimation for homogeneous multi-sensor measurements)

To handle the noise contained in measurements from multiple sensors of the same type, an adaptive weighted fusion estimation algorithm is proposed. The algorithm needs no prior knowledge of the sensors' measurement data: based on the estimated variance of each sensor, it adjusts each sensor's fusion weight on the fly so that the mean square error of the fused system stays minimal, and the estimate is shown in theory to be the linear unbiased minimum-variance estimate. Simulation results demonstrate the effectiveness of the algorithm; its fusion results are better than the traditional averaging estimate in both accuracy and fault tolerance.

2014-09-03
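
A small numerical sketch of the core idea in the abstract above: each sensor's weight is made inversely proportional to its estimated variance, which minimizes the variance of the fused estimate. The readings and variances below are made-up data for illustration.

    #include <cstdio>

    int main() {
        // Made-up readings of the same quantity from three sensors of the same type,
        // together with their (estimated) measurement variances.
        double x[3]   = {10.2, 9.8, 10.5};
        double var[3] = {0.4,  0.1, 0.9};

        // Optimal weights for a linear unbiased minimum-variance estimate:
        // w_i proportional to 1/var_i, normalized so the weights sum to 1.
        double inv_sum = 0.0;
        for (int i = 0; i < 3; ++i) inv_sum += 1.0 / var[i];

        double fused = 0.0;
        for (int i = 0; i < 3; ++i) {
            double w = (1.0 / var[i]) / inv_sum;
            fused += w * x[i];
        }
        std::printf("fused estimate = %.3f, fused variance = %.3f\n", fused, 1.0 / inv_sum);
        return 0;
    }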

QQ-style chat simulator

UDP and Linux multithreading; a simulated QQ chat client.

2013-09-23

Logistics tracking (real-time message sending)

Position information is obtained from GPS and sent and received over GPRS. Incoming messages are parsed and specific replies are sent back to specific numbers; signals are used to send messages at fixed intervals.

2013-09-17

SMS management system

A Linux C SMS management system that simulates message management with file and linked-list operations.

2013-09-17

3ds Max lecture series: character design

Covers 3D character modelling and character motion design, with complete worked examples of character animation.

2012-05-05

HDU ACM templates

ACM templates from HDU (Hangzhou Dianzi University), helpful for understanding a number of algorithms. ACM templates, JPVision, Fighting! To be or not to be, that is a question. alpc48

2012-05-05

Java MPEG player

java版MPEG播放器 import java.io.*; import java.net.*; import java.awt.*; import java.awt.image.*; import java.applet.*; /** * This class represents a buffered input stream which can read * variable length codes from MPEG-1 video streams. */ class BitInputStream { /** * MPEG video layers start codes */ public final static int SYNC_START_CODE = 0x000001; public final static int PIC_START_CODE = 0x00000100; public final static int SLICE_MIN_CODE = 0x00000101; public final static int SLICE_MAX_CODE = 0x000001af; public final static int USER_START_CODE = 0x000001b2; public final static int SEQ_START_CODE = 0x000001b3; public final static int EXT_START_CODE = 0x000001b5; public final static int SEQ_END_CODE = 0x000001b7; public final static int GOP_START_CODE = 0x000001b8; /** * The underlying input stream */ private InputStream stream; /** * The bit buffer variables */ private int bitbuffer, bitcount; /** * The 32 bit buffer variables */ private int buffer[], count, position; /** * Initializes the bit input stream object */ public BitInputStream(InputStream inputStream) { stream = inputStream; buffer = new int[1024]; bitbuffer = bitcount = 0; count = position = 0; } /** * Reads the next MPEG-1 layer start code */ public int getCode() throws IOException { alignBits(8); while (showBits(24) != SYNC_START_CODE) flushBits(8); return getBits(32); } /** * Shows the next MPEG-1 layer start code */ public int showCode() throws IOException { alignBits(8); while (showBits(24) != SYNC_START_CODE) flushBits(8); return showBits(32); } /** * Reads the next variable length code */ public int getBits(int nbits) throws IOException { int bits; if (nbits <= bitcount) { bits = bitbuffer >>> (32 - nbits); bitbuffer <<= nbits; bitcount -= nbits; } else { bits = bitbuffer >>> (32 - nbits); nbits -= bitcount; bitbuffer = get32Bits(); bits |= bitbuffer >>> (32 - nbits); bitbuffer <<= nbits; bitcount = 32 - nbits; } if (nbits >= 32) bitbuffer = 0; return bits; } /** * Shows the next variable length code */ public int showBits(int nbits) throws IOException { int bits = bitbuffer >>> (32 - nbits); if (nbits > bitcount) { bits |= show32Bits() >>> (32 + bitcount - nbits); } return bits; } /** * Flushes the current variable length code */ public void flushBits(int nbits) throws IOException { if (nbits <= bitcount) { bitbuffer <<= nbits; bitcount -= nbits; } else { nbits -= bitcount; bitbuffer = get32Bits() << nbits; bitcount = 32 - nbits; } } /** * Aligns the input stream pointer to a given boundary */ public void alignBits(int nbits) throws IOException { flushBits(bitcount % nbits); } /** * Reads the next 32-bit code from the buffered stream */ private int get32Bits() throws IOException { if (position >= count) { position = 0; for (count = 0; count < buffer.length; count++) buffer[count] = read32Bits(); } return buffer[position++]; } /** * Shows the next 32-bit code from the buffered stream */ private int show32Bits() throws IOException { if (position >= count) { position = 0; for (count = 0; count < buffer.length; count++) buffer[count] = read32Bits(); } return buffer[position]; } /** * Reads 32-bit big endian codes from the stream */ private int read32Bits() throws IOException { if (stream.available() <= 0) return SEQ_END_CODE; int a0 = stream.read() & 0xff; int a1 = stream.read() & 0xff; int a2 = stream.read() & 0xff; int a3 = stream.read() & 0xff; return (a0 << 24) + (a1 << 16) + (a2 << 8) + (a3 << 0); } } /** * Huffman VLC entropy decoder for MPEG-1 video streams. 
The tables * are from ISO/IEC 13818-2 DIS, Annex B, variable length code tables. */ class VLCInputStream extends BitInputStream { /** * Table B-1, variable length codes for macroblock address increments */ private final static byte MBAtable[][] = { // 00000011xxx { 33,11 }, { 32,11 }, { 31,11 }, { 30,11 }, { 29,11 }, { 28,11 }, { 27,11 }, { 26,11 }, // 0000010xxxx { 25,11 }, { 24,11 }, { 23,11 }, { 22,11 }, { 21,10 }, { 21,10 }, { 20,10 }, { 20,10 }, { 19,10 }, { 19,10 }, { 18,10 }, { 18,10 }, { 17,10 }, { 17,10 }, { 16,10 }, { 16,10 }, // 0000xxxx... { 0, 0 }, { 0, 0 }, { 0, 0 }, { 33,11 }, { 25,11 }, { 19,10 }, { 15, 8 }, { 14, 8 }, { 13, 8 }, { 12, 8 }, { 11, 8 }, { 10, 8 }, { 9, 7 }, { 9, 7 }, { 8, 7 }, { 8, 7 }, // 00xxx...... { 0, 0 }, { 13, 8 }, { 7, 5 }, { 6, 5 }, { 5, 4 }, { 5, 4 }, { 4, 4 }, { 4, 4 }, // xxx........ { 0, 0 }, { 5, 4 }, { 3, 3 }, { 2, 3 }, { 1, 1 }, { 1, 1 }, { 1, 1 }, { 1, 1 } }; /** * Table B-2, variable length codes for I-picture macroblock types */ private final static byte IMBtable[][] = { // xx { 0, 0 }, { 17,2 }, { 1,1 }, { 1,1 } }; /** * Table B-3, variable length codes for P-picture macroblock types */ private final static byte PMBtable[][] = { // 000xxx { 0,0 }, { 17,6 }, { 18,5 }, { 18,5 }, { 26,5 }, { 26,5 }, { 1,5 }, { 1,5 }, // xxx... { 0,0 }, { 8,3 }, { 2,2 }, { 2,2 }, { 10,1 }, { 10,1 }, { 10,1 }, { 10,1 } }; /** * Table B-4, variable length codes for B-picture macroblock types */ private final static byte BMBtable[][] = { // 00xxxx { 0,0 }, { 17,6 }, { 22,6 }, { 26,6 }, { 30,5 }, { 30,5 }, { 1,5 }, { 1,5 }, { 8,4 }, { 8,4 }, { 8,4 }, { 8,4 }, { 10,4 }, { 10,4 }, { 10,4 }, { 10,4 }, // xxx... { 0,0 }, { 8,4 }, { 4,3 }, { 6,3 }, { 12,2 }, { 12,2 }, { 14,2 }, { 14,2 }, }; /** * Table B-9, variable length codes for coded block patterns */ private final static byte CBPtable[][] = { // 000000xxx { 0,0 }, { 0,9 }, { 39,9 }, { 27,9 }, { 59,9 }, { 55,9 }, { 47,9 }, { 31,9 }, // 000xxxxx. { 0,0 }, { 39,9 }, { 59,9 }, { 47,9 }, { 58,8 }, { 54,8 }, { 46,8 }, { 30,8 }, { 57,8 }, { 53,8 }, { 45,8 }, { 29,8 }, { 38,8 }, { 26,8 }, { 37,8 }, { 25,8 }, { 43,8 }, { 23,8 }, { 51,8 }, { 15,8 }, { 42,8 }, { 22,8 }, { 50,8 }, { 14,8 }, { 41,8 }, { 21,8 }, { 49,8 }, { 13,8 }, { 35,8 }, { 19,8 }, { 11,8 }, { 7,8 }, // 001xxxx.. { 34,7 }, { 18,7 }, { 10,7 }, { 6,7 }, { 33,7 }, { 17,7 }, { 9,7 }, { 5,7 }, { 63,6 }, { 63,6 }, { 3,6 }, { 3,6 }, { 36,6 }, { 36,6 }, { 24,6 }, { 24,6 }, // xxxxx.... { 0,0 }, { 57,8 }, { 43,8 }, { 41,8 }, { 34,7 }, { 33,7 }, { 63,6 }, { 36,6 }, { 62,5 }, { 2,5 }, { 61,5 }, { 1,5 }, { 56,5 }, { 52,5 }, { 44,5 }, { 28,5 }, { 40,5 }, { 20,5 }, { 48,5 }, { 12,5 }, { 32,4 }, { 32,4 }, { 16,4 }, { 16,4 }, { 8,4 }, { 8,4 }, { 4,4 }, { 4,4 }, { 60,3 }, { 60,3 }, { 60,3 }, { 60,3 } }; /** * Table B-10, variable length codes for motion vector codes */ private final static byte MVtable[][] = { // 00000011xx { 16,10 }, { 15,10 }, { 14,10 }, { 13,10 }, // 0000010xxx { 12,10 }, { 11,10 }, { 10, 9 }, { 10, 9 }, { 9, 9 }, { 9, 9 }, { 8, 9 }, { 8, 9 }, // 000xxxx... { 0, 0 }, { 0, 0 }, { 12,10 }, { 7, 7 }, { 6, 7 }, { 5, 7 }, { 4, 6 }, { 4, 6 }, { 3, 4 }, { 3, 4 }, { 3, 4 }, { 3, 4 }, { 3, 4 }, { 3, 4 }, { 3, 4 }, { 3, 4 }, // xxx....... { 0, 0 }, { 2, 3 }, { 1, 2 }, { 1, 2 }, { 0, 0 }, { 0, 0 }, { 0, 0 }, { 0, 0 } }; /** * Table B-12, variable length codes for DC luminance sizes */ private final static byte DClumtable[][] = { // xxx...... { 1,2 }, { 1,2 }, { 2,2 }, { 2,2 }, { 0,3 }, { 3,3 }, { 4,3 }, { 5,4 }, // 111xxx... 
{ 5,4 }, { 5,4 }, { 5,4 }, { 5,4 }, { 6,5 }, { 6,5 }, { 7,6 }, { 8,7 }, // 111111xxx { 8,7 }, { 8,7 }, { 8,7 }, { 8,7 }, { 9,8 }, { 9,8 }, { 10,9 }, { 11,9 } }; /** * Table B-13, variable length codes for DC chrominance sizes */ private final static byte DCchrtable[][] = { // xxxx...... { 0,2 }, { 0,2 }, { 0,2 }, { 0,2 }, { 1,2 }, { 1,2 }, { 1,2 }, { 1,2 }, { 2,2 }, { 2,2 }, { 2,2 }, { 2,2 }, { 3,3 }, { 3,3 }, { 4,4 }, { 5,5 }, // 1111xxx... { 5,5 }, { 5,5 }, { 5,5 }, { 5,5 }, { 6,6 }, { 6,6 }, { 7,7 }, { 8,8 }, // 1111111xxx { 8,8 }, { 8,8 }, { 8,8 }, { 8,8 }, { 9,9 }, { 9,9 }, { 10,10 }, { 11,10 } }; public final static short EOB = 64; public final static short ESC = 65; /** * Table B-14, variable length codes for DCT coefficients */ private final static short DCTtable[][] = { // 000000000001xxxx { 4609,16 }, { 4353,16 }, { 4097,16 }, { 3841,16 }, { 774,16 }, { 528,16 }, { 527,16 }, { 526,16 }, { 525,16 }, { 524,16 }, { 523,16 }, { 287,16 }, { 286,16 }, { 285,16 }, { 284,16 }, { 283,16 }, // 00000000001xxxx. {10240,15 }, { 9984,15 }, { 9728,15 }, { 9472,15 }, { 9216,15 }, { 8960,15 }, { 8704,15 }, { 8448,15 }, { 8192,15 }, { 3585,15 }, { 3329,15 }, { 3073,15 }, { 2817,15 }, { 2561,15 }, { 2305,15 }, { 2049,15 }, // 0000000001xxxx.. { 7936,14 }, { 7680,14 }, { 7424,14 }, { 7168,14 }, { 6912,14 }, { 6656,14 }, { 6400,14 }, { 6144,14 }, { 5888,14 }, { 5632,14 }, { 5376,14 }, { 5120,14 }, { 4864,14 }, { 4608,14 }, { 4352,14 }, { 4096,14 }, // 000000001xxxx... { 522,13 }, { 521,13 }, { 773,13 }, { 1027,13 }, { 1282,13 }, { 1793,13 }, { 1537,13 }, { 3840,13 }, { 3584,13 }, { 3328,13 }, { 3072,13 }, { 282,13 }, { 281,13 }, { 280,13 }, { 279,13 }, { 278,13 }, // 00000001xxxx.... { 2816,12 }, { 520,12 }, { 772,12 }, { 2560,12 }, { 1026,12 }, { 519,12 }, { 277,12 }, { 276,12 }, { 2304,12 }, { 275,12 }, { 274,12 }, { 1281,12 }, { 771,12 }, { 2048,12 }, { 518,12 }, { 273,12 }, // 0000001xxx...... { 272,10 }, { 517,10 }, { 1792,10 }, { 770,10 }, { 1025,10 }, { 271,10 }, { 270,10 }, { 516,10 }, // 000xxxx......... { 0, 0 }, { 272,10 }, { ESC, 6 }, { ESC, 6 }, { 514, 7 }, { 265, 7 }, { 1024, 7 }, { 264, 7 }, { 263, 6 }, { 263, 6 }, { 262, 6 }, { 262, 6 }, { 513, 6 }, { 513, 6 }, { 261, 6 }, { 261, 6 }, // 00100xxx........ { 269, 8 }, { 1536, 8 }, { 268, 8 }, { 267, 8 }, { 515, 8 }, { 769, 8 }, { 1280, 8 }, { 266, 8 }, // xxxxx........... 
{ 0, 0 }, { 514, 7 }, { 263, 6 }, { 513, 6 }, { 269, 8 }, { 768, 5 }, { 260, 5 }, { 259, 5 }, { 512, 4 }, { 512, 4 }, { 258, 4 }, { 258, 4 }, { 257, 3 }, { 257, 3 }, { 257, 3 }, { 257, 3 }, { EOB, 2 }, { EOB, 2 }, { EOB, 2 }, { EOB, 2 }, { EOB, 2 }, { EOB, 2 }, { EOB, 2 }, { EOB, 2 }, { 256, 2 }, { 256, 2 }, { 256, 2 }, { 256, 2 }, { 256, 2 }, { 256, 2 }, { 256, 2 }, { 256, 2 } }; /** * Storage for RLE run and level of DCT block coefficients */ private int data[]; /** * Initializes the Huffman entropy decoder for MPEG-1 streams */ public VLCInputStream(InputStream inputStream) { super(inputStream); data = new int[2]; } /** * Returns macroblock address increment codes */ public int getMBACode() throws IOException { int code, value = 0; /* skip macroblock escape */ while ((code = showBits(11)) == 15) { flushBits(11); } /* decode macroblock skip codes */ while ((code = showBits(11)) == 8) { flushBits(11); value += 33; } /* decode macroblock increment */ if (code >= 512) code = (code >> 8) + 48; else if (code >= 128) code = (code >> 6) + 40; else if (code >= 48) code = (code >> 3) + 24; else if (code >= 24) code -= 24; else throw new IOException("Invalid macro block address increment"); flushBits(MBAtable[code][1]); return value + MBAtable[code][0]; } /** * Returns I-picture macroblock type flags */ public int getIMBCode() throws IOException { int code = showBits(2); if (code <= 0) throw new IOException("Invalid I-picture macro block code"); flushBits(IMBtable[code][1]); return IMBtable[code][0]; } /** * Returns P-picture macroblock type flags */ public int getPMBCode() throws IOException { int code = showBits(6); if (code >= 8) code = (code >> 3) + 8; else if (code <= 0) throw new IOException("Invalid P-picture macro block code"); flushBits(PMBtable[code][1]); return PMBtable[code][0]; } /** * Returns B-picture macroblock type flags */ public int getBMBCode() throws IOException { int code = showBits(6); if (code >= 16) code = (code >> 3) + 16; else if (code <= 0) throw new IOException("Invalid B-picture macro block code"); flushBits(BMBtable[code][1]); return BMBtable[code][0]; } /** * Returns coded block pattern flags */ public int getCBPCode() throws IOException { int code = showBits(9); if (code >= 128) code = (code >> 4) + 56; else if (code >= 64) code = (code >> 2) + 24; else if (code >= 8) code = (code >> 1) + 8; else if (code <= 0) throw new IOException("Invalid block pattern code"); flushBits(CBPtable[code][1]); return CBPtable[code][0]; } /** * Returns motion vector codes */ public int getMVCode() throws IOException { int code = showBits(10); if (code >= 128) code = (code >> 7) + 28; else if (code >= 24) code = (code >> 3) + 12; else if (code >= 12) code -= 12; else throw new IOException("Invalid motion vector code"); flushBits(MVtable[code][1]); code = MVtable[code][0]; return (getBits(1) == 0 ? 
code : -code); } /** * Returns intra coded DC luminance coefficients */ public int getIntraDCLumValue() throws IOException { int code = showBits(9); if (code >= 504) code -= 488; else if (code >= 448) code = (code >> 3) - 48; else code >>= 6; flushBits(DClumtable[code][1]); int nbits = DClumtable[code][0]; if (nbits != 0) { code = getBits(nbits); if ((code & (1 << (nbits - 1))) == 0) code -= (1 << nbits) - 1; return code; } return 0; } /** * Returns intra coded DC chrominance coefficients */ public int getIntraDCChromValue() throws IOException { int code = showBits(10); if (code >= 1016) code -= 992; else if (code >= 960) code = (code >> 3) - 104; else code >>= 6; flushBits(DCchrtable[code][1]); int nbits = DCchrtable[code][0]; if (nbits != 0) { code = getBits(nbits); if ((code & (1 << (nbits - 1))) == 0) code -= (1 << nbits) - 1; return code; } return 0; } /** * Returns inter coded DC luminance or chrominance coefficients */ public int[] getInterDCValue() throws IOException { /* handle special variable length code */ if (showBits(1) != 0) { data[0] = 0; data[1] = getBits(2) == 2 ? 1 : -1; return data; } return getACValue(); } /** * Returns AC luminance or chrominance coefficients */ public int[] getACValue() throws IOException { int code = showBits(16); if (code >= 10240) code = (code >> 11) + 112; else if (code >= 8192) code = (code >> 8) + 72; else if (code >= 1024) code = (code >> 9) + 88; else if (code >= 512) code = (code >> 6) + 72; else if (code >= 256) code = (code >> 4) + 48; else if (code >= 128) code = (code >> 3) + 32; else if (code >= 64) code = (code >> 2) + 16; else if (code >= 32) code >>= 1; else if (code >= 16) code -= 16; else throw new IOException("Invalid DCT coefficient code"); flushBits(DCTtable[code][1]); data[0] = DCTtable[code][0] & 0xFF; data[1] = DCTtable[code][0] >>> 8; if (data[0] == ESC) { data[0] = getBits(6); data[1] = getBits(8); if (data[1] == 0x00) data[1] = getBits(8); else if (data[1] == 0x80) data[1] = getBits(8) - 256; else if (data[1] >= 0x80) data[1] -= 256; } else if (data[0] != EOB) { if (getBits(1) != 0) data[1] = -data[1]; } return data; } } /** * Fast inverse two-dimensional discrete cosine transform algorithm * by Chen-Wang using 32 bit integer arithmetic (8 bit coefficients). */ class IDCT { /** * The basic DCT block is 8x8 samples */ private final static int DCTSIZE = 8; /** * Integer arithmetic precision constants */ private final static int PASS_BITS = 3; private final static int CONST_BITS = 11; /** * Precomputed DCT cosine kernel functions: * Ci = (2^CONST_BITS)*sqrt(2.0)*cos(i * PI / 16.0) */ private final static int C1 = 2841; private final static int C2 = 2676; private final static int C3 = 2408; private final static int C5 = 1609; private final static int C6 = 1108; private final static int C7 = 565; public static void transform(int block[]) { /* pass 1: process rows */ for (int i = 0, offset = 0; i < DCTSIZE; i++, offset += DCTSIZE) { /* get coefficients */ int d0 = block[offset + 0]; int d4 = block[offset + 1]; int d3 = block[offset + 2]; int d7 = block[offset + 3]; int d1 = block[offset + 4]; int d6 = block[offset + 5]; int d2 = block[offset + 6]; int d5 = block[offset + 7]; int d8; /* AC terms all zero? 
*/ if ((d1 | d2 | d3 | d4 | d5 | d6 | d7) == 0) { d0 <<= PASS_BITS; block[offset + 0] = d0; block[offset + 1] = d0; block[offset + 2] = d0; block[offset + 3] = d0; block[offset + 4] = d0; block[offset + 5] = d0; block[offset + 6] = d0; block[offset + 7] = d0; continue; } /* first stage */ d8 = (d4 + d5) * C7; d4 = d8 + d4 * (C1 - C7); d5 = d8 - d5 * (C1 + C7); d8 = (d6 + d7) * C3; d6 = d8 - d6 * (C3 - C5); d7 = d8 - d7 * (C3 + C5); /* second stage */ d8 = ((d0 + d1) << CONST_BITS) + (1 << (CONST_BITS - PASS_BITS - 1)); d0 = ((d0 - d1) << CONST_BITS) + (1 << (CONST_BITS - PASS_BITS - 1)); d1 = (d2 + d3) * C6; d2 = d1 - d2 * (C2 + C6); d3 = d1 + d3 * (C2 - C6); d1 = d4 + d6; d4 = d4 - d6; d6 = d5 + d7; d5 = d5 - d7; /* third stage */ d7 = d8 + d3; d8 = d8 - d3; d3 = d0 + d2; d0 = d0 - d2; d2 = ((d4 + d5) * 181) >> 8; d4 = ((d4 - d5) * 181) >> 8; /* output stage */ block[offset + 0] = (d7 + d1) >> (CONST_BITS - PASS_BITS); block[offset + 7] = (d7 - d1) >> (CONST_BITS - PASS_BITS); block[offset + 1] = (d3 + d2) >> (CONST_BITS - PASS_BITS); block[offset + 6] = (d3 - d2) >> (CONST_BITS - PASS_BITS); block[offset + 2] = (d0 + d4) >> (CONST_BITS - PASS_BITS); block[offset + 5] = (d0 - d4) >> (CONST_BITS - PASS_BITS); block[offset + 3] = (d8 + d6) >> (CONST_BITS - PASS_BITS); block[offset + 4] = (d8 - d6) >> (CONST_BITS - PASS_BITS); } /* pass 2: process columns */ for (int i = 0, offset = 0; i < DCTSIZE; i++, offset++) { /* get coefficients */ int d0 = block[offset + DCTSIZE*0]; int d4 = block[offset + DCTSIZE*1]; int d3 = block[offset + DCTSIZE*2]; int d7 = block[offset + DCTSIZE*3]; int d1 = block[offset + DCTSIZE*4]; int d6 = block[offset + DCTSIZE*5]; int d2 = block[offset + DCTSIZE*6]; int d5 = block[offset + DCTSIZE*7]; int d8; /* AC terms all zero? */ if ((d1 | d2 | d3 | d4 | d5 | d6 | d7) == 0) { d0 >>= PASS_BITS + 3; block[offset + DCTSIZE*0] = d0; block[offset + DCTSIZE*1] = d0; block[offset + DCTSIZE*2] = d0; block[offset + DCTSIZE*3] = d0; block[offset + DCTSIZE*4] = d0; block[offset + DCTSIZE*5] = d0; block[offset + DCTSIZE*6] = d0; block[offset + DCTSIZE*7] = d0; continue; } /* first stage */ d8 = (d4 + d5) * C7; d4 = (d8 + d4 * (C1 - C7)) >> 3; d5 = (d8 - d5 * (C1 + C7)) >> 3; d8 = (d6 + d7) * C3; d6 = (d8 - d6 * (C3 - C5)) >> 3; d7 = (d8 - d7 * (C3 + C5)) >> 3; /* second stage */ d8 = ((d0 + d1) << (CONST_BITS - 3)) + (1 << (CONST_BITS + PASS_BITS-1)); d0 = ((d0 - d1) << (CONST_BITS - 3)) + (1 << (CONST_BITS + PASS_BITS-1)); d1 = (d2 + d3) * C6; d2 = (d1 - d2 * (C2 + C6)) >> 3; d3 = (d1 + d3 * (C2 - C6)) >> 3; d1 = d4 + d6; d4 = d4 - d6; d6 = d5 + d7; d5 = d5 - d7; /* third stage */ d7 = d8 + d3; d8 = d8 - d3; d3 = d0 + d2; d0 = d0 - d2; d2 = ((d4 + d5) * 181) >> 8; d4 = ((d4 - d5) * 181) >> 8; /* output stage */ block[offset + DCTSIZE*0] = (d7 + d1) >> (CONST_BITS + PASS_BITS); block[offset + DCTSIZE*7] = (d7 - d1) >> (CONST_BITS + PASS_BITS); block[offset + DCTSIZE*1] = (d3 + d2) >> (CONST_BITS + PASS_BITS); block[offset + DCTSIZE*6] = (d3 - d2) >> (CONST_BITS + PASS_BITS); block[offset + DCTSIZE*2] = (d0 + d4) >> (CONST_BITS + PASS_BITS); block[offset + DCTSIZE*5] = (d0 - d4) >> (CONST_BITS + PASS_BITS); block[offset + DCTSIZE*3] = (d8 + d6) >> (CONST_BITS + PASS_BITS); block[offset + DCTSIZE*4] = (d8 - d6) >> (CONST_BITS + PASS_BITS); } } } /** * Motion vector information used for MPEG-1 motion prediction. 
*/ class MotionVector { /** * Motion vector displacements (6 bits) */ public int horizontal; public int vertical; /** * Motion vector displacement residual size */ private int residualSize; /** * Motion displacement in half or full pixels? */ private boolean pixelSteps; /** * Initializes the motion vector object */ public MotionVector() { horizontal = vertical = 0; residualSize = 0; pixelSteps = false; } /** * Changes the current motion vector predictors */ public void setVector(int horizontal, int vertical) { this.horizontal = horizontal; this.vertical = vertical; } /** * Reads the picture motion vector information */ public void getMotionInfo(BitInputStream stream) throws IOException { pixelSteps = (stream.getBits(1) != 0); residualSize = (stream.getBits(3) - 1); } /** * Reads the macro block motion vectors */ public void getMotionVector(VLCInputStream stream) throws IOException { horizontal = getMotionDisplacement(stream, horizontal); vertical = getMotionDisplacement(stream, vertical); } /** * Reads and reconstructs the motion vector displacements */ private int getMotionDisplacement(VLCInputStream stream, int vector) throws IOException { int code = stream.getMVCode(); int residual = (code != 0 && residualSize != 0 ? stream.getBits(residualSize) : 0); int limit = 16 << residualSize; if (pixelSteps) vector >>= 1; if (code > 0) { if ((vector += ((code - 1) << residualSize) + residual + 1) >= limit) vector -= limit << 1; } else if (code < 0) { if ((vector -= ((-code - 1) << residualSize) + residual + 1) < -limit) vector += limit << 1; } if (pixelSteps) vector <<= 1; return vector; } } /** * Macroblock decoder and dequantizer for MPEG-1 video streams. */ class Macroblock { /** * Macroblock type encoding */ public final static int I_TYPE = 1; public final static int P_TYPE = 2; public final static int B_TYPE = 3; public final static int D_TYPE = 4; /** * Macroblock type bit fields */ public final static int INTRA = 0x01; public final static int PATTERN = 0x02; public final static int BACKWARD = 0x04; public final static int FORWARD = 0x08; public final static int QUANT = 0x10; public final static int EMPTY = 0x80; /** * Default quantization matrix for intra coded macro blocks */ private final static int defaultIntraMatrix[] = { 8, 16, 16, 19, 16, 19, 22, 22, 22, 22, 22, 22, 26, 24, 26, 27, 27, 27, 26, 26, 26, 26, 27, 27, 27, 29, 29, 29, 34, 34, 34, 29, 29, 29, 27, 27, 29, 29, 32, 32, 34, 34, 37, 38, 37, 35, 35, 34, 35, 38, 38, 40, 40, 40, 48, 48, 46, 46, 56, 56, 58, 69, 69, 83 }; /** * Mapping for zig-zag scan ordering */ private final static int zigzag[] = { 0, 1, 8, 16, 9, 2, 3, 10, 17, 24, 32, 25, 18, 11, 4, 5, 12, 19, 26, 33, 40, 48, 41, 34, 27, 20, 13, 6, 7, 14, 21, 28, 35, 42, 49, 56, 57, 50, 43, 36, 29, 22, 15, 23, 30, 37, 44, 51, 58, 59, 52, 45, 38, 31, 39, 46, 53, 60, 61, 54, 47, 55, 62, 63 }; /** * Quantization scale for macro blocks */ private int scale; /** * Quantization matrix for intra coded blocks */ private int intraMatrix[]; /** * Quantization matrix for inter coded blocks */ private int interMatrix[]; /** * Color components sample blocks (8 bits) */ private int block[][]; /** * Predictors for DC coefficients (10 bits) */ private int predictor[]; /** * Macroblock type encoding */ private int type; /** * Macroblock type flags */ private int flags; /** * Motion prediction vectors */ private MotionVector forward; private MotionVector backward; /** * Initializes the MPEG-1 macroblock decoder object */ public Macroblock() { /* create quantization matrices */ intraMatrix = new 
int[64]; interMatrix = new int[64]; /* create motion prediction vectors */ forward = new MotionVector(); backward = new MotionVector(); /* create DCT blocks and predictors */ block = new int[6][64]; predictor = new int[3]; /* set up default macro block types */ type = I_TYPE; flags = EMPTY; /* set up default quantization scale */ scale = 0; /* set up default quantization matrices */ for (int i = 0; i < 64; i++) { intraMatrix[i] = defaultIntraMatrix[i]; interMatrix[i] = 16; } /* set up default DC coefficient predictors */ for (int i = 0; i < 3; i++) predictor[i] = 1024; } /** * Returns the quantization scale */ public int getScale() { return scale; } /** * Changes the quantization scale */ public void setScale(int scale) { this.scale = scale; } /** * Returns the quantization matrix for intra coded blocks */ public int[] getIntraMatrix() { return intraMatrix; } /** * Changes the quantization matrix for intra coded blocks */ public void setIntraMatrix(int matrix[]) { for (int i = 0; i < 64; i++) intraMatrix[i] = matrix[i]; } /** * Returns the quantization matrix for inter coded blocks */ public int[] getInterMatrix() { return interMatrix; } /** * Changes the quantization matrix for inter coded blocks */ public void setInterMatrix(int matrix[]) { for (int i = 0; i < 64; i++) interMatrix[i] = matrix[i]; } /** * Returns the macroblock type encoding */ public int getType() { return type; } /** * Changes the macroblock type encoding */ public void setType(int type) { this.type = type; } /** * Returns the component block of samples */ public int[][] getData() { return block; } /** * Changes the component block of samples */ public void setData(int component, int data[]) { for (int i = 0; i < 64; i++) block[component][i] = data[i]; } /** * Returns the macro block type flags */ public int getFlags() { return flags; } /** * Changes the macro block type flags */ public void setFlags(int flags) { this.flags = flags; } /** * Returns true if the block is empty */ public boolean isEmpty() { return (flags & EMPTY) != 0; } /** * Returns true if the block is intra coded */ public boolean isIntraCoded() { return (flags & INTRA) != 0; } /** * Returns true if the block is pattern coded */ public boolean isPatternCoded() { return ((flags & PATTERN) != 0); } /** * Returns true if the block is forward predicted */ public boolean isBackwardPredicted() { return (flags & BACKWARD) != 0; } /** * Returns true if the block is backward predicted */ public boolean isForwardPredicted() { return (flags & FORWARD) != 0; } /** * Returns true if the block is forward and backward predicted */ public boolean isBidirPredicted() { return ((flags & FORWARD) != 0) && ((flags & BACKWARD) != 0); } /** * Returns true if the block has a quantization scale */ public boolean isQuantScaled() { return ((flags & QUANT) != 0); } /** * Returns the forward motion vector */ public MotionVector getForwardVector() { return forward; } /** * Returns the backward motion vector */ public MotionVector getBackwardVector() { return backward; } /** * Resets the DCT coefficient predictors */ public void resetDataPredictors() { for (int i = 0; i < 3; i++) predictor[i] = 1024; } /** * Resets the motion vector predictors */ public void resetMotionVectors() { forward.setVector(0, 0); backward.setVector(0, 0); } /** * Parses the next encoded MPEG-1 macroblock (according to ISO 11172-2) * decoding and dequantizing the DCT coefficient component blocks. 
*/ public void getMacroblock(VLCInputStream stream) throws IOException { /* read macro block bit flags */ switch (getType()) { case I_TYPE: setFlags(stream.getIMBCode()); break; case P_TYPE: setFlags(stream.getPMBCode()); if (!isForwardPredicted()) resetMotionVectors(); if (!isIntraCoded()) resetDataPredictors(); break; case B_TYPE: setFlags(stream.getBMBCode()); if (isIntraCoded()) resetMotionVectors(); else resetDataPredictors(); break; } /* read quantization scale */ if (isQuantScaled()) { setScale(stream.getBits(5)); } /* read forward motion vector */ if (isForwardPredicted()) { getForwardVector().getMotionVector(stream); } /* read backward motion vector */ if (isBackwardPredicted()) { getBackwardVector().getMotionVector(stream); } /* read block pattern code */ int pattern = 0; if (isPatternCoded()) { pattern = stream.getCBPCode(); } /* clear DCT coefficients blocks */ for (int i = 0; i < 6; i++) { for (int j = 0; j < 64; j++) { block[i][j] = 0; } } /* read DCT coefficient blocks */ if (isIntraCoded()) { /* read intra coded DCT coefficients */ for (int i = 0; i < 6; i++) { block[i][0] = predictor[i < 4 ? 0 : i - 3]; getIntraBlock(stream, block[i], i); predictor[i < 4 ? 0 : i - 3] = block[i][0]; IDCT.transform(block[i]); } } else { /* read inter coded DCT coefficients */ for (int i = 0; i < 6; i++) { if ((pattern & (1 << (5 - i))) != 0) { getInterBlock(stream, block[i]); IDCT.transform(block[i]); } } } } /** * Parses an intra coded MPEG-1 block (according to ISO 11172-2) decoding * and dequantizing the DC and AC coefficients stored in zig zag order. */ private void getIntraBlock(VLCInputStream stream, int block[], int component) throws IOException { /* decode DC coefficients */ if (component < 4) block[0] += stream.getIntraDCLumValue() << 3; else block[0] += stream.getIntraDCChromValue() << 3; /* decode AC coefficients */ for (int i = 1; i <= block.length; i++) { int data[] = stream.getACValue(); if (data[0] == stream.EOB) break; int position = zigzag[i = (i + data[0]) & 63]; block[position] = (data[1] * scale * intraMatrix[i]) >> 3; } } /** * Parses a inter coded MPEG-1 block (according to ISO 11172-2) decoding * and dequantizing the DC and AC coefficients stored in zig zag order. */ private void getInterBlock(VLCInputStream stream, int block[]) throws IOException { /* decode DC and AC coefficients */ for (int i = 0; i <= block.length; i++) { int data[] = (i == 0 ? stream.getInterDCValue() : stream.getACValue()); if (data[0] == stream.EOB) break; data[1] += ((data[1] >> 31) << 1) + 1; int position = zigzag[i = (i + data[0]) & 63]; block[position] = (data[1] * scale * interMatrix[i]) >> 3; } } } /** * Picture decoder for MPEG-1 video streams according to ISO 11172-2. 
*/ class Picture { /** * Picture current and predictors frame buffers */ private int frameBuffer[]; private int forwardBuffer[]; private int backwardBuffer[]; /** * Macroblock decoder and dequantizer */ private Macroblock macroblock; /** * Dimension of the picture in pixels */ private int width, height; /** * Dimension of the picture in macro blocks */ private int mbColumns, mbRows; /** * Picture temporal reference number */ private int number; /** * Picture synchronization delay (1/90000s ticks) */ private int delay; /** * Constructs a MPEG-1 video stream picture */ public Picture() { /* constructs the macroblock */ macroblock = new Macroblock(); /* initialize temporal fields */ number = 0; delay = 0; /* initialize dimension and frame buffers */ width = height = 0; mbColumns = mbRows = 0; frameBuffer = null; forwardBuffer = null; backwardBuffer = null; } /** * Changes the picture dimensions */ public void setSize(int width, int height) { /* set up picture dimension */ this.width = width; this.height = height; /* compute dimension in macro blocks */ mbColumns = (width + 15) >> 4; mbRows = (height + 15) >> 4; /* allocate frame buffers */ frameBuffer = new int[256 * mbRows * mbColumns]; forwardBuffer = new int[256 * mbRows * mbColumns]; backwardBuffer = new int[256 * mbRows * mbColumns]; } /** * Returns the picture temporal reference */ public int getNumber() { return number; } /** * Changes the picture temporal reference */ public void setNumber(int number) { this.number = number; } /** * Returns the picture temporal delay */ public int getDelay() { return delay; } /** * Changes the picture temporal delay */ public void setDelay(int delay) { this.delay = delay; } /** * Returns the macro block object */ public Macroblock getMacroblock() { return macroblock; } /** * Returns the dimension of the picture in pixels */ public int getWidth() { return width; } public int getHeight() { return height; } public int getStride() { return mbColumns << 4; } /** * Returns the last picture of the MPEG-1 video stream */ public int[] getLastFrame() { return backwardBuffer; } /** * Parses the next picture from the MPEG-1 video stream */ public int[] getFrame(VLCInputStream stream) throws IOException { /* read picture temporal reference */ setNumber(stream.getBits(10)); /* read picture encoding type */ macroblock.setType(stream.getBits(3)); /* read VBV delay of this picture */ setDelay(stream.getBits(16)); /* read forward motion information */ if (macroblock.getType() != Macroblock.I_TYPE) { macroblock.getForwardVector().getMotionInfo(stream); } /* read backward motion information */ if (macroblock.getType() == Macroblock.B_TYPE) { macroblock.getBackwardVector().getMotionInfo(stream); } /* skip extra bit information */ while (stream.getBits(1) != 0) stream.flushBits(8); /* skip extensions and user data chunks */ while (stream.showCode() == BitInputStream.EXT_START_CODE || stream.showCode() == BitInputStream.USER_START_CODE) { stream.getCode(); } /* update forward frame buffer */ if (macroblock.getType() != Macroblock.B_TYPE) { int buffer[] = forwardBuffer; forwardBuffer = backwardBuffer; backwardBuffer = buffer; } /* parse picture slices */ while (stream.showCode() >= BitInputStream.SLICE_MIN_CODE && stream.showCode() <= BitInputStream.SLICE_MAX_CODE) { getSlice(stream, stream.getCode()); } /* update backward frame buffer */ if (macroblock.getType() != Macroblock.B_TYPE) { int buffer[] = backwardBuffer; backwardBuffer = frameBuffer; frameBuffer = buffer; return forwardBuffer; } return frameBuffer; } /** * 
Parses the next picture slice from the MPEG-1 video stream */ private void getSlice(VLCInputStream stream, int code) throws IOException { /* compute macro block address */ int address = (code - BitInputStream.SLICE_MIN_CODE) * mbColumns - 1; /* read slice quantization scale */ macroblock.setScale(stream.getBits(5)); /* skip extra bit information */ while (stream.getBits(1) != 0) { stream.flushBits(8); } /* reset DCT predictors and motion vectors */ macroblock.setFlags(Macroblock.EMPTY); macroblock.resetDataPredictors(); macroblock.resetMotionVectors(); /* parse slice macro blocks */ while (stream.showBits(23) != 0) { /* get macro block address increment */ int lastAddress = address + stream.getMBACode(); /* handle skipped macro blocks */ if (macroblock.isEmpty()) { /* handle the first macro block address in the slice */ address = lastAddress; } else { while (++address < lastAddress) { /* assume inter coded macro block with zero coefficients */ macroblock.resetDataPredictors(); /* use previous motion vectors or zero in P-picture macro blocks */ if (macroblock.getType() == Macroblock.P_TYPE) macroblock.resetMotionVectors(); /* process skipped macro block */ if (macroblock.isBidirPredicted()) { motionPrediction(address, forwardBuffer, backwardBuffer, macroblock.getForwardVector(), macroblock.getBackwardVector()); } else if (macroblock.isBackwardPredicted()) { motionPrediction(address, backwardBuffer, macroblock.getBackwardVector()); } else { motionPrediction(address, forwardBuffer, macroblock.getForwardVector()); } } } /* decode macro block */ macroblock.getMacroblock(stream); /* process macro block */ if (macroblock.isIntraCoded()) { motionPrediction(address, macroblock.getData()); } else { if (macroblock.isBidirPredicted()) { motionPrediction(address, forwardBuffer, backwardBuffer, macroblock.getForwardVector(), macroblock.getBackwardVector()); } else if (macroblock.isBackwardPredicted()) { motionPrediction(address, backwardBuffer, macroblock.getBackwardVector()); } else { motionPrediction(address, forwardBuffer, macroblock.getForwardVector()); } motionCompensation(address, macroblock.getData()); } } } private void motionPrediction(int address, int sourceBuffer[], MotionVector vector) { int width = mbColumns << 4; int offset = ((address % mbColumns) + width * (address / mbColumns)) << 4; int deltaA = (vector.horizontal >> 1) + width * (vector.vertical >> 1); int deltaB = (vector.horizontal & 1) + width * (vector.vertical & 1); if (deltaB == 0) { for (int i = 0; i < 16; i++) { System.arraycopy(sourceBuffer, offset + deltaA, frameBuffer, offset, 16); offset += width; } } else { deltaB += deltaA; for (int i = 0; i < 16; i++) { for (int j = 0; j < 16; j++) { int d0 = sourceBuffer[offset + deltaA]; int d1 = sourceBuffer[offset + deltaB]; int d2 = (d0 & 0xfefefe) + (d1 & 0xfefefe); int d3 = (d0 & d1) & 0x010101; frameBuffer[offset++] = (d2 >> 1) + d3; } offset += width - 16; } } } private void motionPrediction(int address, int sourceBufferA[], int sourceBufferB[], MotionVector vectorA, MotionVector vectorB) { int width = mbColumns << 4; int offset = ((address % mbColumns) + width * (address / mbColumns)) << 4; int deltaA = (vectorA.horizontal >> 1) + width * (vectorA.vertical >> 1); int deltaB = (vectorB.horizontal >> 1) + width * (vectorB.vertical >> 1); int deltaC = (vectorA.horizontal & 1) + width * (vectorA.vertical & 1); int deltaD = (vectorB.horizontal & 1) + width * (vectorB.vertical & 1); if (deltaC == 0 && deltaD == 0) { for (int i = 0; i < 16; i++) { for (int j = 0; j < 16; j++) { int 
d0 = sourceBufferA[offset + deltaA]; int d1 = sourceBufferB[offset + deltaB]; int d2 = (d0 & 0xfefefe) + (d1 & 0xfefefe); int d3 = (d0 & d1) & 0x010101; frameBuffer[offset++] = (d2 >> 1) + d3; } offset += width - 16; } } else { deltaC += deltaA; deltaD += deltaB; for (int i = 0; i < 16; i++) { for (int j = 0; j < 16; j++) { int d0 = sourceBufferA[offset + deltaA]; int d1 = sourceBufferB[offset + deltaB]; int d2 = sourceBufferA[offset + deltaC]; int d3 = sourceBufferB[offset + deltaD]; int d4 = ((d0 & 0xfcfcfc) + (d1 & 0xfcfcfc) + (d2 & 0xfcfcfc) + (d3 & 0xfcfcfc)); int d5 = (d0 + d1 + d2 + d3 - d4) & 0x040404; frameBuffer[offset++] = (d4 + d5) >> 2; } offset += width - 16; } } } private void motionPrediction(int address, int block[][]) { int width, offset, index; /* compute macroblock address */ width = mbColumns << 4; address = ((address % mbColumns) + width * (address / mbColumns)) << 4; /* reconstruct luminance blocks */ offset = address; index = 0; for (int i = 0; i < 8; i++) { for (int j = 0; j < 8; j++) { frameBuffer[offset] = clip[512 + block[0][index]]; frameBuffer[offset+8] = clip[512 + block[1][index]]; offset++; index++; } offset += width - 8; } offset = address + (width << 3); index = 0; for (int i = 0; i < 8; i++) { for (int j = 0; j < 8; j++) { frameBuffer[offset] = clip[512 + block[2][index]]; frameBuffer[offset+8] = clip[512 + block[3][index]]; offset++; index++; } offset += width - 8; } /* reconstruct chrominance blocks */ offset = address; index = 0; for (int i = 0; i < 8; i++) { for (int j = 0; j < 8; j++) { int Cb = clip[512 + block[4][index]]; int Cr = clip[512 + block[5][index]]; int CbCr = (Cb << 8) + (Cr << 16); frameBuffer[offset++] += CbCr; frameBuffer[offset++] += CbCr; offset += width - 2; frameBuffer[offset++] += CbCr; frameBuffer[offset++] += CbCr; offset -= width; index++; } offset += width + width - 16; } } private void motionCompensation(int address, int block[][]) { int width, offset, index; /* compute macroblock address */ width = mbColumns << 4; address = ((address % mbColumns) + width * (address / mbColumns)) << 4; /* reconstruct luminance blocks */ offset = index = 0; for (int i = 0; i < 8; i++) { for (int j = 0; j < 8; j++) { data[offset] = block[0][index]; data[offset+8] = block[1][index]; data[offset+128] = block[2][index]; data[offset+8+128] = block[3][index]; offset++; index++; } offset += 8; } /* reconstruct chrominance blocks */ offset = index = 0; for (int i = 0; i < 8; i++) { for (int j = 0; j < 8; j++) { int Y, YCbCr; int Cb = 512 + block[4][index]; int Cr = 512 + block[5][index]; Y = 512 + data[offset++]; YCbCr = frameBuffer[address]; frameBuffer[address++] = (clip[((YCbCr >> 0) & 0xFF) + Y ] << 0) + (clip[((YCbCr >> 8) & 0xFF) + Cb] << 8) + (clip[((YCbCr >> 16) & 0xFF) + Cr] << 16); Y = 512 + data[offset++]; YCbCr = frameBuffer[address]; frameBuffer[address++] = (clip[((YCbCr >> 0) & 0xFF) + Y ] << 0) + (clip[((YCbCr >> 8) & 0xFF) + Cb] << 8) + (clip[((YCbCr >> 16) & 0xFF) + Cr] << 16); offset += 16-2; address += width-2; Y = 512 + data[offset++]; YCbCr = frameBuffer[address]; frameBuffer[address++] = (clip[((YCbCr >> 0) & 0xFF) + Y ] << 0) + (clip[((YCbCr >> 8) & 0xFF) + Cb] << 8) + (clip[((YCbCr >> 16) & 0xFF) + Cr] << 16); Y = 512 + data[offset++]; YCbCr = frameBuffer[address]; frameBuffer[address++] = (clip[((YCbCr >> 0) & 0xFF) + Y ] << 0) + (clip[((YCbCr >> 8) & 0xFF) + Cb] << 8) + (clip[((YCbCr >> 16) & 0xFF) + Cr] << 16); offset -= 16; address -= width; index++; } offset += 16; address += width+width-16; } } private static int 
data[] = new int[256]; private static int clip[] = new int[1024]; static { for (int i = 0; i < 1024; i++) { clip[i] = Math.min(Math.max(i - 512, 0), 255); } } } /** * The MPEG-1 video stream decoder according to ISO 11172-2. */ class MPEGVideoStream { /** * MPEG-1 video frame rate table */ private static final int frameRateTable[] = { 30, 24, 24, 25, 30, 30, 50, 60, 60, 12, 30, 30, 30, 30, 30, 30 }; /** * MPEG-1 video stream frame rate (frames per second) */ private int frameRate; /** * MPEG-1 video stream bit rate (bits per second) */ private int bitRate; /** * MPEG-1 video stream VBV buffer size (16 kbit steps) */ private int bufferSize; /** * MPEG-1 video stream time record (in frames) */ private int hour, minute, second, frame; /** * MPEG-1 video stream current picture */ private Picture picture; /** * MPEG-1 video stream boolean fields */ private boolean constrained, dropped, closed, broken; /** * The last MPEG-1 picture frame parsed */ private int frameBuffer[]; /** * The underlaying VLC input stream */ private VLCInputStream stream; /** * Initializes the MPEG-1 video input stream object */ public MPEGVideoStream(InputStream inputStream) throws IOException { /* set ups the VLC input stream */ stream = new VLCInputStream(inputStream); /* set ups the picture decoder */ picture = new Picture(); /* reset rates and buffer size */ frameRate = 0; bitRate = 0; bufferSize = 0; /* reset time record */ hour = 0; minute = 0; second = 0; frame = 0; /* reset boolean fields */ constrained = false; dropped = false; closed = false; broken = false; /* reset last frame buffer */ frameBuffer = getFrame(); } /** * Returns the MPEG-1 video stream frame rate */ public int getFrameRate() { return frameRate; } /** * Changes the MPEG-1 video stream frame rate */ public void setFrameRate(int rate) { frameRate = rate; } /** * Returns the MPEG-1 video stream bit rate */ public int getBitRate() { return bitRate; } /** * Changes the MPEG-1 video stream bit rate */ public void setBitRate(int rate) { bitRate = rate; } /** * Returns the MPEG-1 video stream VBV buffer size */ public int getBufferSize() { return bufferSize; } /** * Changes the MPEG-1 video stream VBV buffer size */ public void setBufferSize(int size) { bufferSize = size; } /** * Returns the MPEG-1 video stream time record */ public long getTime() { return frame + getFrameRate() * (second + 60L * minute + 3600L * hour); } /** * Changes the MPEG-1 video stream time record */ public void setTime(int hour, int minute, int second, int frame) { this.hour = hour; this.minute = minute; this.second = second; this.frame = frame; } /** * Returns true if the video parameters are constrained */ public boolean isConstrained() { return constrained; } /** * Enables or disables the video parameters constrains */ public void setConstrained(boolean constrained) { this.constrained = constrained; } /** * Returns true if the group of pictures drops frames */ public boolean isDropped() { return dropped; } /** * Changes the dropped flag of the group of pictures */ public void setDropped(boolean dropped) { this.dropped = dropped; } /** * Returns true if the group of pictures is closed */ public boolean isClosed() { return closed; } /** * Changes the closed flag of the group of pictures */ public void setClosed(boolean closed) { this.closed = closed; } /** * Returns true if there is a broken link */ public boolean isBroken() { return broken; } /** * Changes the broken flag of the group of pictures */ public void setBroken(boolean broken) { this.broken = broken; } /** * Returns 
the MPEG-1 video picture dimensions */ public int getWidth() { return picture.getWidth(); } public int getHeight() { return picture.getHeight(); } public int getStride() { return picture.getStride(); } /** * Parses the next frame of the MPEG-1 video stream */ public int[] getFrame() throws IOException { while (stream.showCode() != BitInputStream.SEQ_END_CODE) { switch (stream.getCode()) { case BitInputStream.SEQ_START_CODE: getSequenceHeader(stream); break; case BitInputStream.GOP_START_CODE: getGroupPictures(stream); break; case BitInputStream.PIC_START_CODE: return getPictureFrame(stream); case BitInputStream.USER_START_CODE: case BitInputStream.EXT_START_CODE: break; default: // throw new IOException("Unknown MPEG-1 video layer start code"); break; } } if (frameBuffer != picture.getLastFrame()) { frameBuffer = picture.getLastFrame(); return frameBuffer; } return null; } /** * Parses the sequence header from the MPEG-1 video stream */ private void getSequenceHeader(VLCInputStream stream) throws IOException { /* read picture dimensions in pixels */ int width = stream.getBits(12); int height = stream.getBits(12); int aspectRatio = stream.getBits(4); /* changes the MPEG-1 picture dimension */ if (picture.getWidth() == 0 && picture.getHeight() == 0) picture.setSize(width, height); /* read picture and bit rates */ setFrameRate(frameRateTable[stream.getBits(4)]); setBitRate(400 * stream.getBits(18)); stream.getBits(1); /* read VBV buffer size */ setBufferSize(stream.getBits(10)); /* read constrained parameters flag */ setConstrained(stream.getBits(1) != 0); /* read quantization matrix for intra coded blocks */ int intraMatrix[] = picture.getMacroblock().getIntraMatrix(); if (stream.getBits(1) != 0) { for (int i = 0; i < 64; i++) intraMatrix[i] = stream.getBits(8); } /* read quantization matrix for inter coded blocks */ int interMatrix[] = picture.getMacroblock().getInterMatrix(); if (stream.getBits(1) != 0) { for (int i = 0; i < 64; i++) interMatrix[i] = stream.getBits(8); } } /** * Parses group of pictures header from the MPEG-1 video stream */ private void getGroupPictures(VLCInputStream stream) throws IOException { /* read the drop frame flag */ setDropped(stream.getBits(1) != 0); /* read the time record */ int hour = stream.getBits(5); int minute = stream.getBits(6); int marker = stream.getBits(1); int second = stream.getBits(6); int frame = stream.getBits(6); setTime(hour, minute, second, frame); /* read closed and broken link flags */ setClosed(stream.getBits(1) != 0); setBroken(stream.getBits(1) != 0); } /** * Parses the next picture from the MPEG-1 video stream */ private int[] getPictureFrame(VLCInputStream stream) throws IOException { return picture.getFrame(stream); } } /** * The MPEG-1 video stream decoder applet that is intended to run * embedded inside of a Web page or another application. 
*/ public class MPEGPlayer extends Applet implements Runnable { /** * The MPEG-1 video input stream */ private MPEGVideoStream stream; /** * The picture frame buffer */ private int pixels[], width, height, stride; /** * The memory image color model */ private DirectColorModel model = null; /** * The memory image source */ private MemoryImageSource source = null; /** * The memory image object */ private Image image = null; /** * The applet's execution thread */ private Thread kicker = null; /** * The video stream location */ private URL url = null; /** * The repeat boolean parameter */ private boolean repeat = true; /** * Applet information */ public String getAppletInfo() { return "MPEGPlayer 0.9 (15 Apr 1998), Carlos Hasan ([email protected])"; } /** * Parameters information */ public String[][] getParameterInfo() { String info[][] = { { "source", "URL", "MPEG-1 video stream location" }, { "repeat", "boolean", "repeat the video sequence" } }; return info; } /** * Applet initialization */ public void init() { try { if (getParameter("source") != null) url = new URL(getDocumentBase(), getParameter("source")); if (getParameter("repeat") != null) repeat = getParameter("repeat").equalsIgnoreCase("true"); } catch (MalformedURLException exception) { showStatus("MPEG Exception: " + exception); } } /** * Start the execution of the applet */ public void start() { if (kicker == null && url != null) { kicker = new Thread(this); kicker.start(); } showStatus(getAppletInfo()); } /** * Stop the execution of the applet */ public void stop() { if (kicker != null && kicker.isAlive()) { kicker.stop(); } kicker = null; } /** * The applet main execution code */ public void run() { int frame[]; long time; try { do { stream = new MPEGVideoStream(url.openStream()); width = stream.getWidth(); height = stream.getHeight(); stride = stream.getStride(); resize(width, height); pixels = new int[stride * height]; model = new DirectColorModel(24, 0x0000ff, 0x00ff00, 0xff0000); time = System.currentTimeMillis(); while ((frame = readFrame()) != null) { drawFrame(frame, width, height, stride); source = new MemoryImageSource(width, height, model,pixels,0,stride); image = createImage(source); paint(getGraphics()); time += 1000L / stream.getFrameRate(); try { Thread.sleep(Math.max(time - System.currentTimeMillis(), 0)); } catch (InterruptedException exception) { } image.flush(); } } while (repeat); } catch (IOException exception) { showStatus("MPEG I/O Exception: " + exception); } } /** * Paint the current frame */ public void paint(Graphics graphics) { if (image != null) graphics.drawImage(image, 0, 0, null); } /** * Reads the next MPEG-1 video frame */ private int[] readFrame() { while (true) { try { return stream.getFrame(); } catch (Exception exception) { showStatus("MPEG Exception: " + exception); } } } /** * Draws the current MPEG-1 video frame */ private void drawFrame(int frame[], int width, int height, int stride) { int offset = 0; for (int y = 0; y < height; y++) { for (int x = 0; x < width; x++) { int YCbCr = frame[offset]; int Y = 512 + ((YCbCr >> 0) & 0xff); int Cb = cbtable[(YCbCr >> 8) & 0xff]; int Cr = crtable[(YCbCr >> 16) & 0xff]; pixels[offset++] = (clip[Y + (Cr >> 16)] << 0) + (clip[Y + (((Cb + Cr) << 16) >> 16)] << 8) + (clip[Y + (Cb >> 16)] << 16); } offset += stride - width; } } /** * Color conversion lookup tables */ private static int clip[], cbtable[], crtable[]; static { clip = new int[1024]; cbtable = new int[256]; crtable = new int[256]; for (int i = 0; i < 1024; i++) { clip[i] = Math.min(Math.max(i 
- 512, 0), 255); } for (int i = 0; i < 256; i++) { int level = i - 128; cbtable[i] = (((int)(1.77200 * level)) << 16) - ((int)(0.34414 * level)); crtable[i] = (((int)(1.40200 * level)) << 16) - ((int)(0.71414 * level)); } } }

2012-04-20
