  • Blog (3)
  • Resources (3)
  • Favorites
  • Following

[Repost] Memory Management under Linux

Memory Management under Linux. 2008-12-10 14:47. Reposted from: http://hi.baidu.com/ywdblog/blog/item/c5a32312800aea56f819b889.html. An Abstract Model

2011-07-30 09:07:22 · 264 reads
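The "An Abstract Model" heading in the excerpt refers to the standard abstract model of Linux memory management: every process sees its own private virtual address space, and the kernel maps those virtual pages onto physical page frames only as they are used. As a rough user-space illustration of that model (a sketch of mine, not code from the reposted article), the program below requests anonymous pages with mmap and prints the page granularity the kernel reports:

    /* Minimal sketch (not from the original post): request anonymous
     * pages from the kernel and print the page size the VM uses. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void) {
        long page = sysconf(_SC_PAGESIZE);   /* typically 4096 on x86 */
        size_t len = 4 * (size_t)page;       /* ask for four pages */

        /* MAP_ANONYMOUS returns zero-filled memory backed by no file;
         * physical frames are assigned only when pages are touched. */
        char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return EXIT_FAILURE;
        }

        memset(buf, 0xAB, len);              /* fault the pages in */
        printf("page size: %ld bytes, mapped %zu bytes at %p\n",
               page, len, (void *)buf);

        munmap(buf, len);
        return EXIT_SUCCESS;
    }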

[Repost] Notes on Static and Dynamic Libraries on Linux

Notes on Static and Dynamic Libraries on Linux. GNU/Linux program development. Author: 李迟 (Li Chi), 2011-03-24 18:48. Author's note: I first published this on my CSDN blog around the end of last year, and I am republishing it here without changing a word. Back then I wanted a deeper understanding of libraries, and I learned of the book 《程序员的自我修养--链接、装载与库》 (roughly, "Programmer's Self-Cultivation: Linking, Loading and Libraries"). I have since read it once and gained a lot from it, but it needs another reading. A few points up front: 1. Operating system: Linux (FC9); compiler: gcc-4.3.0; editors: including but not limited to emacs and vim. None of this should be a constraint. 2. The generated executables are named according to a regular pattern, purely to…

2011-05-10 22:28:00 · 642 reads
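The excerpt above cuts off before the commands themselves, but the workflow such notes describe is the standard one: a static library is an ar archive of object files whose code is copied into the executable at link time, while a shared library is compiled with -fPIC and resolved by the dynamic loader at run time. A minimal sketch follows, using hypothetical file names (mymath.c, libmymath) that are not taken from the original post:

    /* mymath.c -- hypothetical one-function library, used only to
     * illustrate the two build flavors.
     *
     * Static library (object code is copied into the executable):
     *   gcc -c mymath.c -o mymath.o
     *   ar rcs libmymath.a mymath.o
     *   gcc main.c -L. -lmymath -o app_static
     *
     * Shared library (resolved by the dynamic loader at run time):
     *   gcc -c -fPIC mymath.c -o mymath.o
     *   gcc -shared mymath.o -o libmymath.so
     *   gcc main.c -L. -lmymath -o app_shared
     *   LD_LIBRARY_PATH=. ./app_shared
     */
    int my_add(int a, int b) {
        return a + b;
    }

    /* main.c -- trivial client, identical for both flavors */
    #include <stdio.h>
    int my_add(int a, int b);
    int main(void) {
        printf("%d\n", my_add(2, 3));  /* prints 5 */
        return 0;
    }

Running ldd on app_shared lists libmymath.so as a dependency; ldd on app_static does not, since the library code is already inside the binary.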

[Repost] ACM PKU Problem Classification (Complete Compiled Edition)

DP:
  1011  NTA                           easy
  1013  Great Equipment               easy
  1024  Calendar Game                 easy
  1027  Human Gene Functions          easy
  1037  Gridland                      easy
  1052  Algernon's Noxious Emissions  easy
  1409  Communication System          easy, but the statement is very easy to misread
  1425  Cr…

2011-04-05 14:27:00 · 774 reads
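Several of the "easy" entries above (Human Gene Functions in particular) are two-sequence, table-filling DPs. As an illustration of that general pattern, not a solution taken from the post, here is a plain longest-common-subsequence DP in C:

    /* Minimal sketch of the two-sequence table-filling DP pattern
     * (longest common subsequence); illustrative only, not a POJ solution. */
    #include <stdio.h>
    #include <string.h>

    #define MAXN 1000

    static int dp[MAXN + 1][MAXN + 1];  /* dp[i][j] = LCS of a[0..i) and b[0..j) */

    int lcs(const char *a, const char *b) {
        int n = strlen(a), m = strlen(b);
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= m; j++) {
                if (a[i - 1] == b[j - 1])
                    dp[i][j] = dp[i - 1][j - 1] + 1;        /* extend a match */
                else
                    dp[i][j] = dp[i - 1][j] > dp[i][j - 1]  /* drop one char */
                             ? dp[i - 1][j] : dp[i][j - 1];
            }
        }
        return dp[n][m];
    }

    int main(void) {
        printf("%d\n", lcs("AGCAT", "GAC"));  /* prints 2 */
        return 0;
    }

Human Gene Functions, as usually stated, scores matches and gaps from a weight table rather than the 0/1 used here, but the recurrence has the same shape.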

NVMe技术标准和原理深度解析.pdf (an in-depth analysis of the NVMe technical standard and its principles)

2021-03-07

Fast Data Processing with Spark, 2nd Edition

Spark is a framework for writing fast, distributed programs. Spark solves similar problems as Hadoop MapReduce does, but with a fast in-memory approach and a clean functional-style API. With its ability to integrate with Hadoop, and with built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big data sets.

Fast Data Processing with Spark covers how to write distributed MapReduce-style programs with Spark. The book guides you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API, to deploying your job to the cluster and tuning it for your purposes.

It covers everything from setting up your Spark cluster in a variety of situations (stand-alone, EC2, and so on) to using the interactive shell to write distributed code interactively. From there, it moves on to writing and deploying distributed jobs in Java, Scala, and Python, then examines how to use the interactive shell to quickly prototype distributed programs and explore the Spark API. It also looks at how to use Hive with Spark via a SQL-like query syntax with Shark, as well as how to manipulate resilient distributed datasets (RDDs).

2017-08-31

Nothing here yet.

Favorites this user created · Favorites this user follows

People this user follows
