Visual Attention Model Implementation
Tags:
• File type: .zip
• File size: 40.25 MB
• Downloads: 1
This project implements the extraction and detection of visual attention regions. It includes detailed code comments and algorithm explanations that are helpful for understanding the implementation.
Code snippet and file information
Attribute       Size  Date        Time   Name
--------- ---------- ----------  -----   ----
Directory          0  2019-01-02  21:02  spatial_transformer(注意力模型)
Directory          0  2018-01-24  13:13  spatial_transformer(注意力模型)\.ipynb_checkpoints
File          132831  2018-01-24  13:13  spatial_transformer(注意力模型)\.ipynb_checkpoints\Attention实例-Spatial Transformer-checkpoint.ipynb
File           89843  2019-01-02  21:02  spatial_transformer(注意力模型)\Attention实例-Spatial Transformer.ipynb
File        43046126  2018-01-24  13:14  spatial_transformer(注意力模型)\mnist_cluttered_60x60_6distortions.npz
File            6851  2018-01-24  13:13  spatial_transformer(注意力模型)\spatial_transformer.py
File          182198  2018-01-24  13:14  spatial_transformer(注意力模型)\st_cnn.png
Directory          0  2018-01-24  13:13  spatial_transformer(注意力模型)\__pycache__
File            5093  2018-01-24  13:13  spatial_transformer(注意力模型)\__pycache__\spatial_transformer.cpython-36.pyc
from keras.layers import Layer
import tensorflow as tf


class SpatialTransformer(Layer):
    """Spatial Transformer layer.

    Implements a spatial transformer layer as described in [1]_.
    Borrowed from [2]_:

    downsample_factor : float
        A value of 1 will keep the original size of the image.
        Values larger than 1 will downsample the image; values below 1
        will upsample the image.
        Example: for an image with height = 100 and width = 200,
        downsample_factor = 2 gives an output image of 50 x 100.

    References
    ----------
    .. [1] Spatial Transformer Networks
           Max Jaderberg, Karen Simonyan, Andrew Zisserman, Koray Kavukcuoglu
           Submitted on 5 Jun 2015
    .. [2] https://github.com/skaae/transformer_network/blob/master/transformerlayer.py
    .. [3] https://github.com/EderSantana/seya/blob/keras1/seya/layers/attention.py
    """

    def __init__(self,
                 localization_net,
                 output_size,
                 **kwargs):
        self.locnet = localization_net
        self.output_size = output_size
        super(SpatialTransformer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.locnet.build(input_shape)
        self.trainable_weights = self.locnet.trainable_weights
        # self.regularizers = self.locnet.regularizers
        # (not sure about this; self.locnet no longer has such an attribute)
        self.constraints = self.locnet.constraints

    def compute_output_shape(self, input_shape):
        output_size = self.output_size
        return (None,
                int(output_size[0]),
                int(output_size[1]),
                int(input_shape[-1]))

    def call(self, X, mask=None):
        # The localization network predicts the affine transformation
        # that is then applied to the input feature map.
        affine_transformation = self.locnet.call(X)
        output = self._transform(affine_transformation, X, self.output_size)
        return output

    def _repeat(self, x, num_repeats):
        # Tile each element of x num_repeats times, then flatten.
        ones = tf.ones((1, num_repeats), dtype='int32')
        x = tf.reshape(x, shape=(-1, 1))
        x = tf.matmul(x, ones)
        return tf.reshape(x, [-1])

    def _interpolate(self, image, x, y, output_size):
        batch_size = tf.shape(image)[0]
        height = tf.shape(image)[1]
        width = tf.shape(image)[2]
        num_channels = tf.shape(image)[3]

        x = tf.cast(x, dtype='float32')
        y = tf.cast(y, dtype='float32')

        height_float = tf.cast(height, dtype='float32')
        width_float = tf.cast(width, dtype='float32')

        output_height = output_size[0]
        output_width = output_size[1]

        # Map normalized coordinates in [-1, 1] to pixel coordinates.
        x = .5 * (x + 1.0) * width_float
        y = .5 * (y + 1.0) * height_float

        # Indices of the four neighbouring pixels for bilinear sampling.
        x0 = tf.cast(tf.floor(x), 'int32')
        x1 = x0 + 1
        y0 = tf.cast(tf.floor(y), 'int32')
        y1 = y0 + 1

        max_y = tf.cast(height - 1, dtype='int32')
        max_x = tf.cast(width - 1, dtype='int32')
        zero = tf.zeros([], dtype='int32')

        # Clip indices to the valid image range.
        x0 = tf.clip_by_value(x0, zero, max_x)
        x1 = tf.clip_by_value(x1, zero, max_x)
        # ... (the snippet is truncated here in the original listing)
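The core of `_interpolate` is standard bilinear sampling: normalized coordinates in [-1, 1] are mapped to pixel space, the indices of the four neighbouring pixels are clipped to the image bounds, and the output value is a weighted blend of those four pixels. Below is a minimal NumPy sketch of that arithmetic for a single point on a single-channel image; the helper name `bilinear_sample` is ours for illustration and is not part of the project's code.

```python
import numpy as np

def bilinear_sample(image, x_norm, y_norm):
    """Sample a 2-D image at normalized coords in [-1, 1],
    mirroring the math of the layer's _interpolate."""
    h, w = image.shape
    # Map [-1, 1] to pixel space, as in x = .5 * (x + 1.0) * width
    x = 0.5 * (x_norm + 1.0) * w
    y = 0.5 * (y_norm + 1.0) * h
    # Four neighbouring pixel indices, clipped to the image bounds
    # (cf. tf.clip_by_value in the layer)
    x0 = int(np.clip(np.floor(x), 0, w - 1)); x1 = min(x0 + 1, w - 1)
    y0 = int(np.clip(np.floor(y), 0, h - 1)); y1 = min(y0 + 1, h - 1)
    # Bilinear weights from the fractional offsets
    wa = (x1 - x) * (y1 - y)   # weight of top-left pixel
    wb = (x1 - x) * (y - y0)   # bottom-left
    wc = (x - x0) * (y1 - y)   # top-right
    wd = (x - x0) * (y - y0)   # bottom-right
    return (wa * image[y0, x0] + wb * image[y1, x0]
            + wc * image[y0, x1] + wd * image[y1, x1])

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_sample(img, -0.25, -0.25))  # -> 7.5
```

For (x_norm, y_norm) = (-0.25, -0.25) on a 4x4 image, the mapped point (1.5, 1.5) sits exactly between pixels (1,1), (2,1), (1,2), and (2,2), so the result is their average, (5 + 9 + 6 + 10) / 4 = 7.5.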