
Original: RabbitMQ Basics

product.py
import pika
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="hello")
channel.basic_publish(exchange="", rout
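The preview cuts off mid-call; a minimal complete producer in the same style, assuming a broker on localhost and the queue name "hello" from the snippet (the message body is illustrative):

import pika

# connect to a RabbitMQ broker running on localhost
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# declare the queue (idempotent) and publish one message via the default exchange
channel.queue_declare(queue="hello")
channel.basic_publish(exchange="", routing_key="hello", body="Hello World!")

connection.close()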

2017-10-31 13:02:35 362

Original: NumPy Basics

import numpy
a = numpy.arange(10) ** 2
a
array([ 0,  1,  4,  9, 16, 25, 36, 49, 64, 81])
b = numpy.arange(10) ** 3
b
array([  0,   1,   8,  27,  64, 125, 216, 343, 512, 729])
c = a + b
c
array([  0,
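The last result is truncated; a script version of the same elementwise arithmetic, with the full sum written out in a comment:

import numpy

a = numpy.arange(10) ** 2   # squares: 0, 1, 4, ..., 81
b = numpy.arange(10) ** 3   # cubes:   0, 1, 8, ..., 729
c = a + b                   # elementwise sum of the two arrays

print(c)  # [  0   2  12  36  80 150 252 392 576 810]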

2017-10-27 00:06:43 595

Original: pymysql Basic Operations

In [2]: conn = pymysql.connect(host="127.0.0.1", user="root", passwd="osyunwei")
In [3]: conn.query("create database pymysql")
Out[3]: 1
In [5]: conn = pymysql.connect(host="127.0.0.1", user="root", passwd=
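A self-contained version of the same flow; credentials come from the transcript, while the database name pymysql_demo and the SELECT are illustrative:

import pymysql

# connect without selecting a database, then create one
conn = pymysql.connect(host="127.0.0.1", user="root", passwd="osyunwei")
conn.query("CREATE DATABASE IF NOT EXISTS pymysql_demo")
conn.close()

# reconnect against the new database and query it through a cursor
conn = pymysql.connect(host="127.0.0.1", user="root",
                       passwd="osyunwei", db="pymysql_demo")
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone())
cursor.close()
conn.close()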

2017-10-26 14:22:32 1115

Original: Handling Various Text Formats with Scrapy: HTML, XML, CSV

Web page (HTML) format:

class CaoItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    urlname = scrapy.Field()
    urlkey = scrapy.Field()
    urlcr = scrapy.Field()
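Given an item class like this, Scrapy's feed exports serialize scraped items to CSV, XML, or JSON without extra code; a sketch (the spider name cao is illustrative):

import scrapy

class CaoItem(scrapy.Item):
    # fields the spider fills in for each scraped page
    urlname = scrapy.Field()
    urlkey = scrapy.Field()
    urlcr = scrapy.Field()

# The export format is inferred from the output file extension, e.g.:
#   scrapy crawl cao -o items.csv
#   scrapy crawl cao -o items.xml
#   scrapy crawl cao -o items.json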

2017-10-26 11:01:33 616

Original: Python 3 urllib Crawler Basics

Basics: add_header() adds a request header.

url = "http://blog.csdn.net/yudiyanwang/article/details/78322039"
req = urllib.request.Request(url)
req.add_header("User-Agent", "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:56.0) Gecko/2
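Completed into a runnable fetch; the User-Agent string is cut off in the preview, so the standard Firefox 56 form is assumed here:

import urllib.request

url = "http://blog.csdn.net/yudiyanwang/article/details/78322039"
req = urllib.request.Request(url)
# send a browser-like User-Agent so the server treats the request as a normal visit
req.add_header("User-Agent",
               "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:56.0) Gecko/20100101 Firefox/56.0")

with urllib.request.urlopen(req) as resp:
    html = resp.read().decode("utf-8")
print(html[:200])  # first 200 characters of the page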

2017-10-25 18:02:09 322

Original: Controlling Font Size with the Mouse Wheel in PyCharm

1. Enabling font zoom in PyCharm: File -> Settings -> Keymap -> type "increase" in the search box -> double-click Increase Font Size -> choose Add Mouse Shortcut in the popup -> in the dialog that appears, hold Ctrl and scroll the mouse wheel up.
2. Py

2017-10-23 19:10:12 11904 3

Original: Spark Streaming

package storm.stream

import org.apache.spark.streaming.dstream.{DStream, InputDStream, ReceiverInputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{Spa
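The post's code is Scala; for reference, a minimal PySpark Streaming equivalent of the classic socket word count (host and port are illustrative):

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "NetworkWordCount")
ssc = StreamingContext(sc, 5)  # 5-second micro-batches

# read lines from a TCP socket and count words per batch
lines = ssc.socketTextStream("localhost", 9999)
counts = (lines.flatMap(lambda line: line.split(" "))
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()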

2017-10-19 21:31:32 246

Original: Hadoop HA on YARN: Cluster Configuration

Cluster setup. Because the number of servers is limited, each machine here runs quite a few processes:

Host        Installed software   Running processes
hadoop001   Hadoop, Zookeeper    NameNode, DFSZKFailoverController, ResourceManager, DataNode, NodeManager, QuorumPeerMa

2017-10-19 12:00:32 264

Original: Storm WordCount

WordCount spout:

package storm.demo;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.a

2017-10-18 00:52:01 255

Original: The storm.yaml Configuration File

Setting                   Description
storm.zookeeper.servers   List of ZooKeeper servers
storm.zookeeper.port      ZooKeeper connection port
storm.local.dir           Local filesystem directory Storm uses (must exist and be readable/writable by the Storm processes)
storm.cluster.mode        St

2017-10-17 22:42:35 362

Original: Scala Implicit Conversions

package test.zhuanhuan

import java.io.File
import scala.io.Source

// implicit conversion via an implicit def
object ImplicitDefDemo {
  object MyImplicitTypeConversion {
    implicit def strToInt(str: String) =

2017-10-16 22:26:59 318

Original: Specifying a Schema Programmatically in Spark SQL

def test5(): Unit = {
  val ss: SparkSession = SparkSession.builder().appName("Spark SQL basic example")
    .config("spark.some.config.option", "some-value").getOrCreate()
  import ss.
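The preview is Scala; the same technique in PySpark, as a sketch with illustrative column names:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName("Spark SQL basic example").getOrCreate()

# build the schema explicitly instead of letting Spark infer it
schema = StructType([
    StructField("name", StringType(), True),
    StructField("age", IntegerType(), True),
])

df = spark.createDataFrame([("zhang san", 15), ("li si", 20)], schema)
df.printSchema()
df.show()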

2017-10-16 16:33:46 721

Original: Working with Spark SQL Result Sets

Raw data:

zhang san,15
li si,15
wang wu,20
zhao liu,22
zhang san,42
li wu,22
li si,20
hello world,18
hello world,18

/**
 * Create an RDD of Person objects from a text file and convert it to a DataFrame
 */
@Test
def test4(): Unit

2017-10-16 16:11:32 2371

Original: Spark SQL Query Operations

Raw data:

001,goods0001,10,20.00
002,goods0001,10,20.00
003,goods0002,50,30.00
004,goods0001,10,30.00
005,goods0003,90,10.00
006,goods0002,10,40.00

createOrReplaceTempView

/**
 * createOrReplaceTempVi
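createOrReplaceTempView registers a DataFrame so it can be queried with plain SQL; a PySpark sketch over the order data above (the column names are illustrative):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-demo").getOrCreate()

df = spark.createDataFrame(
    [("001", "goods0001", 10, 20.00),
     ("002", "goods0001", 10, 20.00),
     ("003", "goods0002", 50, 30.00)],
    ["order_id", "goods_id", "amount", "price"])

# register the DataFrame as a temporary view, then query it with SQL
df.createOrReplaceTempView("orders")
spark.sql("SELECT goods_id, SUM(amount) AS total FROM orders GROUP BY goods_id").show()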

2017-10-16 15:34:44 2059

Original: Spark SQL Basic Operations

Click Edit Configurations, select the project on the left, and enter "-Dspark.master=local" in VM options on the right so the program runs locally in a single thread.

new.txt:

001,goods0001,10,20.00
002,goods0001,10,20.00
003,goods0002,50,30.00
004,goods0001,10,30.00
005,goods0003,90,10.

2017-10-16 15:10:25 1047

Original: Spark Action Operators

first(), count(), reduce():

@Test
def test3(): Unit = {
  val rdd1: RDD[(String, Int)] = sc.makeRDD(Array(("A",1),("B",2),("C",3)))
  println(rdd1.first())  // returns the first element, ("A",1)
  print
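The same actions in PySpark, for comparison (a minimal sketch):

from pyspark import SparkContext

sc = SparkContext("local", "actions-demo")
rdd = sc.parallelize([("A", 1), ("B", 2), ("C", 3)])

print(rdd.first())   # ('A', 1) -- the first element
print(rdd.count())   # 3 -- the number of elements
# fold all pairs into one, concatenating keys and summing values
print(rdd.reduce(lambda a, b: (a[0] + b[0], a[1] + b[1])))  # ('ABC', 6)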

2017-10-15 20:29:37 311

Original: Setting Checkpoints in Spark

cache: keeps the data in memory

@Test
def test1(): Unit = {
  val rdd1: RDD[String] = sc.textFile("hdfs://192.168.8.128:9000/test/README.txt")
  // cache it
  val rdd2: RDD[String] = rdd1.cache()
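cache and checkpoint differ: cache keeps a recomputable in-memory copy, while checkpoint writes the data to reliable storage and truncates the lineage. A PySpark sketch (the HDFS paths mirror the snippet and are illustrative):

from pyspark import SparkContext

sc = SparkContext("local", "checkpoint-demo")
# checkpointed data goes to reliable storage such as HDFS
sc.setCheckpointDir("hdfs://192.168.8.128:9000/test/checkpoints")

rdd = sc.textFile("hdfs://192.168.8.128:9000/test/README.txt")
rdd.cache()         # keep the data in memory for reuse
rdd.checkpoint()    # persist it and cut the lineage at the next action
print(rdd.count())  # the first action triggers both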

2017-10-15 19:38:51 1191

Original: Spark Transformation Operators

package os.Streaming

import org.apache.spark.rdd.RDD
import org.apache.spark.{Partition, SparkConf, SparkContext}
import org.junit
import org.junit.{Before, Test}

import scala.collection.mutable

class St

2017-10-15 17:21:48 371

Original: Spark WordCount in Java

package os.unix;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.
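The preview shows the Java version's imports; for contrast, the whole job fits in a few lines of PySpark (the input path mirrors the HDFS path used elsewhere in these posts):

from pyspark import SparkContext

sc = SparkContext("local", "wordcount")

counts = (sc.textFile("hdfs://192.168.8.128:9000/test/README.txt")
            .flatMap(lambda line: line.split(" "))
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))

for word, n in counts.collect():
    print(word, n)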

2017-09-29 11:59:24 291

Original: Spark SQL WordCount

Raw data:

hello word
sord RDD
RDD hello
hello world
hello c++
hello world
world ni hao

Output:

+-----+------+
| word|counts|
+-----+------+
|hello|     5|
|world|     3|
|  RDD|     2|
|  hao|     1|
| sord|
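One way to express this in PySpark SQL, using explode and split (a sketch; the post's own implementation may differ):

from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split, col

spark = SparkSession.builder.appName("sql-wordcount").getOrCreate()

lines = spark.createDataFrame(
    [("hello word",), ("sord RDD",), ("RDD hello",), ("hello world",),
     ("hello c++",), ("hello world",), ("world ni hao",)], ["value"])

# split each line into words, one row per word, then count per word
words = lines.select(explode(split(col("value"), " ")).alias("word"))
words.groupBy("word").count().orderBy(col("count").desc()).show()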

2017-09-27 12:33:54 1007

Original: HBase Client Query API

/*
 * Query data
 */
@Test
public void testGet() throws IOException {
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.zookeeper.quorum", "os-1:2181,os-2:2181,o
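The post's query code is Java; a Python alternative goes through the happybase Thrift client. A hedged sketch, assuming an HBase Thrift server is running on os-1 and reusing the user_info table and rowkey from the shell examples below:

import happybase

# connect to the HBase Thrift server (host is illustrative)
connection = happybase.Connection("os-1")
table = connection.table("user_info")

# fetch one row by rowkey; the result is a dict of {b"family:qualifier": value}
row = table.row(b"rk001")
print(row.get(b"base_info:id"))

connection.close()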

2017-09-13 19:55:37 481

Original: HBase Client API: Insert, Delete, Update

package os.hbase.index;

import java.io.IOException;
import java.util.ArrayList;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hb

2017-09-13 19:24:27 344

Original: HBase Client API: Creating Tables

package os.hbase.index;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
impor

2017-09-13 18:39:01 366

Original: HBase Shell Operations

hbase(main):001:0> create 'user_info',{NAME=>'base_info',VERSIONS=>3},{NAME=>'extra_info'}
hbase(main):001:0> put 'user_info', 'rk001', 'base_info:id','1'
hbase(main):001:0> put 'user_info', 'rk0012',

2017-09-13 18:09:53 402

Original: ZooKeeper Basic Operations

package os.zk.demo;

import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.util.List;

import org.apache.hadoop.ha.protocolPB.ZKFCProtocolClientSideTranslatorPB;
import org.
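The preview shows the Java client's imports; the same basic operations from Python via the kazoo library, as a hedged sketch (the ensemble address is illustrative):

from kazoo.client import KazooClient

zk = KazooClient(hosts="os-1:2181,os-2:2181,os-3:2181")
zk.start()

# create, read, update, and delete a znode
zk.create("/demo", b"hello", makepath=True)
data, stat = zk.get("/demo")
print(data, stat.version)
zk.set("/demo", b"world")
zk.delete("/demo")

zk.stop()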

2017-09-13 08:45:41 209

Original: Partitioned Traffic-Aggregation MapReduce

ProviceCountMapper.java

package os.bigdata.provincflowcount;

import java.io.IOException;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.T

2017-09-12 09:42:19 389

Original: Traffic-Aggregation MapReduce

FlowCountMapper.java

package os.os.flowcount;

import java.io.IOException;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org

2017-09-12 08:11:03 301

Original: WordCount MapReduce

map

package os.unix.cn;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/*
 * KEYIN: the key type of the input KV pairs (the line's starting
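The Java mapper continues in the post; as a sketch of the same job in Python, Hadoop Streaming runs a plain script as the mapper and reducer (all file and path names here are illustrative):

# wordcount_streaming.py -- acts as mapper or reducer depending on argv.
# Submit with the hadoop-streaming jar, e.g.:
#   hadoop jar hadoop-streaming.jar \
#       -mapper "python wordcount_streaming.py map" \
#       -reducer "python wordcount_streaming.py reduce" \
#       -input /test/in -output /test/out
import sys

def mapper():
    # emit "word<TAB>1" for every word on stdin
    for line in sys.stdin:
        for word in line.strip().split():
            print(word + "\t1")

def reducer():
    # input arrives sorted by key, so counts for a word are adjacent
    current, total = None, 0
    for line in sys.stdin:
        word, n = line.strip().rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(current + "\t" + str(total))
            current, total = word, 0
        total += int(n)
    if current is not None:
        print(current + "\t" + str(total))

mapper() if sys.argv[1:] == ["map"] else reducer()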

2017-09-11 22:57:49 203

Original: Some HDFS Operations

import java.io.FileNotFoundException;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockL

2017-09-11 19:50:26 253

Original: Hadoop

./hadoop-daemon.sh start namenode
./hadoop-daemon.sh start datanode
./hadoop-daemon.sh start secondarynamenode

# resourcemanager nodemanager
./yarn-daemon.sh start resourcemanager
./yarn-daemon.sh s

2017-09-11 19:47:28 160

Original: Hadoop

Hadoop installation: install the JDK first, then download and unpack the Hadoop tarball.

[os@localhost conf]$ pwd
/home/os/hadoop-1.2.1/conf

vim hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_144

# edit core-site.xml
[os@localhost conf]$ cat core-site.xm

2017-09-06 22:37:00 195

Original: SaltStack

salt 'web' test.ping
salt '*' cmd.run 'df -Th'

mkdir /srv/{salt,pillar}
cd /srv/salt
mkdir /srv/salt/files
touch hosts.sls

/etc/hosts:
  file.managed:
    - source: salt://files/hosts  # local file, relative path
    - user

2017-06-28 12:09:00 185

Original: Ansible

ansible <host-pattern> [-f forks] [-m module_name] [-a args]
  -f forks: number of parallel workers to start
  -m module_name: module to use
  -a args: arguments specific to the module

ansible-doc -s copy
ansible-doc -l

Common modules: comm

2017-06-28 10:55:49 244

Original: Downloading Files with Fabric

#coding:utf-8
from fabric.api import *
from fabric.contrib.console import confirm
from fabric.colors import *

env.hosts = ['192.168.1.112']
env.user = 'root'
#env.passwords ={
#
#}

@task
def upload_file(
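The task body is cut off; a minimal complete pair of Fabric 1.x tasks for download and upload (the remote and local paths are illustrative), run as fab get_file or fab upload_file:

# coding: utf-8
from fabric.api import env, get, put, task

env.hosts = ['192.168.1.112']
env.user = 'root'

@task
def get_file():
    # download a remote file into the current local directory
    get('/etc/hosts', './hosts.bak')

@task
def upload_file():
    # upload a local file to the remote host
    put('./hosts.bak', '/tmp/hosts.bak')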

2017-06-27 14:30:02 1546

Original: Common Fabric APIs

#coding:utf8
from fabric.api import *
from fabric.colors import *

env.hosts = ['192.168.1.112', 'chang']
env.user = 'root'
env.password = '1'
env.port = 22

@task
def local_cmd():
    local('ls -la')
    w

2017-06-27 14:00:13 3208

Original: Fabric

fab -f tset.py show catmem

#coding:utf8
from fabric.api import *

env.hosts = ['192.168.1.112', 'chang']
env.user = 'root'
env.password = '1'
env.port = 22

@task
def show():
    run('hostname')
    run('nets

2017-06-27 13:38:07 425

Original: Displaying a Directory Listing in the Browser with Node.js Middleware

Method 1:

var finalhandler = require('finalhandler')
var http = require('http')
var serveIndex = require('serve-index')
var serveStatic = require('serve-static')

// Serve directory indexes for public/ftp folde

2017-06-24 15:13:18 468

Original: Using MySQL from Node.js

var mysql = require('mysql')
var connection = mysql.createConnection({
    host: 'localhost',
    port: 3306,
    database: 'test',
    user: 'root',
    password: 'root'
})
connection.connect(function(err){

2017-06-24 10:58:50 471

Original: The Node.js http Module

Sending data:

var http = require('http')
var options = {
    hostname: 'localhost',
    port: 8888,
    path: '/',
    method: 'POST'
}
var req = http.request(options)
req.write('你好')
req.end('再见')

Receiving data:

var http = require('http')
var ser

2017-06-23 21:50:35 285

Original: Flask with SQLAlchemy

# -*- coding:utf-8 -*-
from flask import Flask
import os
from flask_sqlalchemy import SQLAlchemy
from flask_script import Manager

app = Flask(__name__)
manager = Manager(app)
basedir = os.path.abspath(os.p
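The setup is cut off at basedir; completed into a minimal working example against SQLite, assuming the classic Flask-SQLAlchemy 2.x API of that era (the model and file names are illustrative):

# -*- coding: utf-8 -*-
import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
basedir = os.path.abspath(os.path.dirname(__file__))
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' + os.path.join(basedir, 'data.sqlite')
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64), unique=True)

if __name__ == '__main__':
    db.create_all()                        # create the tables
    db.session.add(User(name='zhang san'))
    db.session.commit()
    print(User.query.count())              # 1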

2017-06-22 15:44:19 1004
