Caffe implementation of the improved CReLU in PVANet
The layer definitions below are taken from the PVANet 9.1 Faster R-CNN prototxt and implement one improved-CReLU block (conv1_1): convolution, batch norm, negation, channel concatenation, per-channel scale/shift, and ReLU.
https://github.com/sanghoon/pva-faster-rcnn/blob/master/models/pvanet/pva9.1/faster_rcnn_train_test_21cls.pt
# conv1_1/conv: 7x7 stride-2 convolution with 16 output channels and no bias
# (the affine shift is handled later by the Scale layer).
layer {
  name: "conv1_1/conv"
  type: "Convolution"
  bottom: "data"
  top: "conv1_1/conv"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 16
    bias_term: false
    weight_filler {
      type: "xavier"
    }
    pad_h: 3
    pad_w: 3
    kernel_h: 7
    kernel_w: 7
    stride_h: 2
    stride_w: 2
  }
}
# conv1_1/bn: in-place batch normalization; the three internal blobs
# (mean, variance, moving-average factor) are frozen with lr_mult: 0,
# and stored global statistics are used at both train and test time.
layer {
  name: "conv1_1/bn"
  type: "BatchNorm"
  bottom: "conv1_1/conv"
  top: "conv1_1/conv"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: true
  }
}
# conv1_1/neg: a Power layer with power=1, scale=-1, shift=0 simply
# negates the normalized response.
layer {
  name: "conv1_1/neg"
  type: "Power"
  bottom: "conv1_1/conv"
  top: "conv1_1/neg"
  power_param {
    power: 1
    scale: -1.0
    shift: 0
  }
}
# conv1_1/concat: concatenate the original and negated responses along
# the channel axis, doubling the channel count from 16 to 32.
layer {
  name: "conv1_1/concat"
  type: "Concat"
  bottom: "conv1_1/conv"
  bottom: "conv1_1/neg"
  top: "conv1_1"
}
# conv1_1/scale: learnable per-channel scale and shift (bias_term: true);
# this is the "improvement" PVANet adds on top of plain CReLU.
layer {
  name: "conv1_1/scale"
  type: "Scale"
  bottom: "conv1_1"
  top: "conv1_1"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}
# conv1_1/relu: final ReLU applied after the per-channel scale/shift.
layer {
  name: "conv1_1/relu"
  type: "ReLU"
  bottom: "conv1_1"
  top: "conv1_1"
}
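To make the data flow concrete, here is a minimal NumPy sketch of what the neg/concat/scale/relu stage above computes (everything after the convolution and batch norm). The function name crelu_block, the input shapes, and the dummy scale/bias values are illustrative assumptions, not part of the PVANet code.

import numpy as np

def crelu_block(x, scale, bias):
    """Sketch of the conv1_1 neg -> concat -> scale -> relu stage.

    x     : conv + batch-norm output, shape (N, C, H, W)
    scale : Scale layer weights, shape (2*C,)
    bias  : Scale layer bias,    shape (2*C,)
    """
    neg = -x                                        # Power: power=1, scale=-1, shift=0
    cat = np.concatenate([x, neg], axis=1)          # Concat along channels: C -> 2*C
    cat = cat * scale[None, :, None, None] \
              + bias[None, :, None, None]           # Scale: per-channel scale and shift
    return np.maximum(cat, 0.0)                     # ReLU

# Illustrative usage with 16 input channels, matching conv1_1 (num_output: 16).
x = np.random.randn(2, 16, 56, 56).astype(np.float32)
scale = np.ones(32, dtype=np.float32)
bias = np.zeros(32, dtype=np.float32)
print(crelu_block(x, scale, bias).shape)            # (2, 32, 56, 56)

The negation/concatenation exploits the observation behind CReLU that early-layer filters tend to come in opposite-phase pairs, so only half the filters need to be learned; the trailing Scale layer lets each channel pick up its own slope and threshold before the ReLU.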