
darknet19_448 model file

Darknet is a relatively lightweight open-source deep learning framework written entirely in C and CUDA. Its main attractions are that it is easy to install, has no required dependencies (even OpenCV is optional), is highly portable, and supports both CPU and GPU computation. Darknet's advantages: it is implemented entirely in C with no dependencies; OpenCV can be used, but only to display images for nicer visualization. It supports CPU (so you can get by without a GPU) as well as GPU (CUDA/cuDNN; the GPU path is of course much faster). It is well suited to low-level research, since it is straightforward to modify and extend the framework from the bottom up.

2018-12-02

darknet19 model file

Darknet is a relatively lightweight open-source deep learning framework written entirely in C and CUDA. Its main attractions are that it is easy to install, has no required dependencies (even OpenCV is optional), is highly portable, and supports both CPU and GPU computation. Darknet's advantages: it is implemented entirely in C with no dependencies; OpenCV can be used, but only to display images for nicer visualization. It supports CPU (so you can get by without a GPU) as well as GPU (CUDA/cuDNN; the GPU path is of course much faster). Precisely because it is so lightweight, it lacks an API as powerful as TensorFlow's, which to me gives it a different kind of flexibility: it is well suited to low-level research, since it is straightforward to modify and extend the framework from the bottom up.

2018-12-02

darknet model file

Darknet is a relatively lightweight open-source deep learning framework written entirely in C and CUDA. Its main attractions are that it is easy to install, has no required dependencies (even OpenCV is optional), is highly portable, and supports both CPU and GPU computation. Darknet's advantages: it is implemented entirely in C with no dependencies; OpenCV can be used, but only to display images for nicer visualization. It supports CPU (so you can get by without a GPU) as well as GPU (CUDA/cuDNN; the GPU path is of course much faster). Precisely because it is so lightweight, it lacks an API as powerful as TensorFlow's, which to me gives it a different kind of flexibility: it is well suited to low-level research, since it is straightforward to modify and extend the framework from the bottom up.

2018-12-02

DenseNet-201 weights

Building on ResNet (see the ResNet entries below), DenseNet extends the network's connectivity further: for any layer, the feature maps of all preceding layers form its input, and its own feature maps feed every subsequent layer. Advantages: it alleviates the vanishing-gradient problem; feature-map propagation and reuse improve (earlier feature maps are passed directly to later layers and exploited more fully); and the number of parameters is greatly reduced.
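The dense-connectivity pattern described above can be sketched as a toy NumPy illustration (this is not the actual DenseNet code; `toy_layer` is a hypothetical stand-in for a conv + BN + ReLU layer):

```python
import numpy as np

def toy_layer(x, out_channels, rng):
    # Stand-in for conv + BN + ReLU: a random linear map over channels.
    w = rng.standard_normal((x.shape[-1], out_channels))
    return np.maximum(x @ w, 0.0)

def dense_block(x, num_layers, growth_rate, rng):
    """Each layer sees the concatenation of the input and all previous
    layers' outputs, and contributes `growth_rate` new channels."""
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)  # all earlier feature maps
        features.append(toy_layer(inp, growth_rate, rng))
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))      # H x W x C feature map
y = dense_block(x, num_layers=4, growth_rate=12, rng=rng)
print(y.shape)  # channels grow by 4 * 12: (8, 8, 64)
```

Note how the channel count grows linearly with depth (16 + 4 × 12 = 64 here), which is why DenseNet can stay parameter-efficient per layer.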

2018-12-02

extraction weights

This model is an offshoot of the GoogleNet model. It doesn't use the "inception" modules, only 1x1 and 3x3 convolutional layers. Top-1 accuracy: 72.5%; Top-5 accuracy: 90.8%; forward timing: 4.8 ms/img; CPU forward timing: 0.97 s/img; cfg file; weight file (90 MB).

2018-12-02

ResNet-50 weights file

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
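The reformulation the abstract describes, learning a residual function F(x) and adding it back to the input through an identity shortcut, can be sketched as a toy NumPy version (illustrative only, not the paper's implementation):

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = F(x, {w1, w2}) + x : the block learns the residual F
    rather than an unreferenced mapping H(x)."""
    f = np.maximum(x @ w1, 0.0)    # first "layer" + ReLU
    f = f @ w2                     # second "layer"
    return np.maximum(f + x, 0.0)  # identity shortcut, then ReLU

rng = np.random.default_rng(42)
x = rng.standard_normal((4, 64))
w1 = rng.standard_normal((64, 64)) * 0.01
w2 = rng.standard_normal((64, 64)) * 0.01
y = residual_block(x, w1, w2)
print(y.shape)  # (4, 64)
```

With the weights near zero the block degenerates to (a rectified) identity, which is exactly why very deep stacks of such blocks remain easy to optimize.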

2018-12-02

ResNet-152 weights file

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

2018-12-02

yolov3-tiny weights file

You only look once (YOLO) is a state-of-the-art, real-time object detection system. On a Pascal Titan X it processes images at 30 FPS and has a mAP of 57.9% on COCO test-dev
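The mAP figure quoted above is computed from box overlaps; the core intersection-over-union (IoU) calculation behind detection metrics can be sketched as follows (a generic illustration, not YOLO's own code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    iw = max(0.0, ix2 - ix1)          # clamp: no overlap -> zero width
    ih = max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```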

2018-12-02

pydot-1.3.0-py2.py3-none-any.whl

pydot-1.3.0-py2.py3-none-any wheel, for use together with graphviz

2018-11-25

pydot_ng package

pyparsing: pydot requires the pyparsing module in order to load DOT files. Graphviz is needed in order to render graphs into any of the plethora of supported output formats.

2018-11-25

Pascal VOC 2007 dataset (for object detection)

The Pascal VOC 2007 dataset (for object detection); useful for evaluating algorithms such as YOLO and Fast-RCNN.
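VOC annotations are per-image XML files; reading the object boxes needs nothing beyond the standard library. The XML below is a hypothetical minimal annotation following the VOC schema:

```python
import xml.etree.ElementTree as ET

# Minimal VOC-style annotation (hypothetical example, same schema as VOC 2007).
xml_text = """
<annotation>
  <filename>000001.jpg</filename>
  <object>
    <name>dog</name>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>
"""

root = ET.fromstring(xml_text)
boxes = []
for obj in root.iter("object"):
    name = obj.findtext("name")
    bb = obj.find("bndbox")
    box = tuple(int(bb.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax"))
    boxes.append((name, box))

print(boxes)  # [('dog', (48, 240, 195, 371))]
```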

2018-11-25

ZeroMQ guide (latest version)

The ZeroMQ guide, latest version, taking you from beginner to expert. Covers the request-reply, publish-subscribe, and other messaging patterns.
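A minimal request-reply sketch using the pyzmq bindings, with both ends in one process over the inproc transport purely for illustration (real deployments would use tcp endpoints and separate processes):

```python
import threading
import zmq  # pyzmq bindings

ctx = zmq.Context.instance()
bound = threading.Event()

def server():
    rep = ctx.socket(zmq.REP)
    rep.bind("inproc://demo")
    bound.set()                       # inproc requires bind before connect
    msg = rep.recv_string()           # REP must receive before sending
    rep.send_string("echo: " + msg)
    rep.close()

t = threading.Thread(target=server)
t.start()
bound.wait()

req = ctx.socket(zmq.REQ)
req.connect("inproc://demo")
req.send_string("hello")              # REQ must send before receiving
reply = req.recv_string()
print(reply)                          # echo: hello
req.close()
t.join()
```

The REQ/REP pair enforces the strict send/receive alternation that the guide's request-reply chapter describes.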

2018-11-25

inception v3 (th) deep learning model weights file

Weights file for the inception v3 (th) deep learning model; can be used as a pretrained model to speed up training.

2018-11-15

inception v3 (including top layers) deep learning model weights file

Weights file for the inception v3 deep learning model; can be used as a pretrained model to speed up training.

2018-11-15

resnet50 residual network deep learning model weights file

Weights file for the resnet50 residual network; can be used as a pretrained model to speed up training.

2018-11-15

inception v3 deep learning model weights file

Weights file for the inception v3 deep learning model; can be used for pretraining to speed up training.

2018-11-15

inception deep learning model weights file

Weights file for the inception deep learning model; can be used as a pretrained model to speed up training.

2018-11-15

xception deep learning model weights file

Weights file for the xception deep learning model; can be used as a pretrained model to speed up training.

2018-11-15

VGG19 deep learning model file

Weights file for the VGG19 deep learning model; can be used as a pretrained model to speed up training.

2018-10-31

Python Standard Library

This book briefly introduces every module in the standard library (more than 200 in total) and provides at least one example showing how to use each; it contains 360 examples in all. The book distills the best of more than 3,000 newsgroup discussions, along with many new scripts, covering every corner of the standard library.
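In the spirit of the book's one-example-per-module approach, here are two standard-library snippets of the kind it collects (my own illustrations, not taken from the book):

```python
import collections
import itertools

# collections.Counter: tally the elements of an iterable.
counts = collections.Counter("abracadabra")
print(counts.most_common(2))          # [('a', 5), ('b', 2)]

# itertools.groupby: group runs of consecutive equal elements.
runs = [(k, len(list(g))) for k, g in itertools.groupby("aaabbc")]
print(runs)                           # [('a', 3), ('b', 2), ('c', 1)]
```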

2018-10-23

Deep Learning, Chinese edition (the "flower book"), watermark-free

Chinese edition of Deep Learning, 712 pages in total.

2017-08-16

Python Data Analysis Basics: NumPy Learning Guide (2nd Edition), book + source code

The best book for learning NumPy.

2017-01-19

Python Data Analysis Basics: NumPy Learning Guide (2nd Edition)

The best book for data analysis with NumPy.

2017-01-19

Python Data Visualization Cookbook, complete high-definition PDF download

The best book for visualization with Python.

2017-01-19

sector.2.8.tar.gz

Installation package for Sector and Sphere, a UDT-based cloud computing system.

2013-07-09

Fast Paxos (PDF)

Abstract: As used in practice, traditional consensus algorithms require three message delays before any process can learn the chosen value. Fast Paxos is an extension of the classic Paxos algorithm that allows the value to be learned in two message delays. How and why the algorithm works are explained informally, and a TLA+ specification of the algorithm appears as an appendix.

2013-07-09

Hackers & Painters (Chinese edition)

A celebrated essay collection by a famous Silicon Valley founder and angel investor; reading it is the best way to understand the essence of computer programming.

2013-07-05
