东方耀AI技术分享
Title:
03. Model Building: Wrapping the Generator (Notes)
Author:
东方耀
Date:
2019-4-25 13:59
import tensorflow as tf


def conv2d_transpose(inputs, out_channel, name, training, with_bn_relu=True):
    """Stride-2 transposed conv that doubles the spatial size, optionally followed by BN + ReLU."""
    with tf.variable_scope(name_or_scope=name):
        conv2d_trans = tf.layers.conv2d_transpose(inputs=inputs,
                                                  filters=out_channel,
                                                  kernel_size=[5, 5],
                                                  strides=[2, 2],
                                                  padding='SAME')
        if with_bn_relu:
            bn = tf.layers.batch_normalization(conv2d_trans, training=training)
            relu = tf.nn.relu(bn)
            return relu
        else:
            return conv2d_trans


class Generator:
    def __init__(self, channels, init_conv_size):
        self._channels = channels
        self._init_conv_size = init_conv_size
        self._reuse = False

    def __call__(self, inputs, training):
        # __call__ lets an instance be invoked like a function,
        # e.g.: g = Generator(...); imgs = g(inputs, training)
        inputs = tf.convert_to_tensor(inputs)
        with tf.variable_scope(name_or_scope='generator', reuse=self._reuse):
            with tf.variable_scope(name_or_scope='inputs_fc'):
                # inputs shape: [N, 100]
                fc = tf.layers.dense(
                    inputs=inputs,
                    units=self._init_conv_size * self._init_conv_size * self._channels[0])
                conv0 = tf.reshape(
                    fc,
                    shape=[-1, self._init_conv_size, self._init_conv_size, self._channels[0]])
                bn0 = tf.layers.batch_normalization(conv0, training=training)
                relu0 = tf.nn.relu(bn0)
                # shape: [N, 4, 4, 128]
            conv2d_trans_inputs = relu0
            # g_channels = [128, 64, 32, 1]
            # range(1, 4) --> 1, 2, 3
            for i in range(1, len(self._channels)):
                # skip BN + ReLU on the last layer; tanh is applied instead
                with_bn_relu = (i != len(self._channels) - 1)
                conv2d_trans_inputs = conv2d_transpose(conv2d_trans_inputs,
                                                       self._channels[i],
                                                       'conv2d-trans-%d' % i,
                                                       training,
                                                       with_bn_relu)
            # shape: [N, 32, 32, 1]
            image_inputs = conv2d_trans_inputs
            with tf.variable_scope(name_or_scope='generator_image'):
                # tanh maps the outputs into [-1, 1]
                imgs_outputs = tf.nn.tanh(image_inputs, name='imgs_outputs')
        self._reuse = True
        # equivalent to tf.trainable_variables(scope='generator')
        self.variables = tf.get_collection(key=tf.GraphKeys.TRAINABLE_VARIABLES,
                                           scope='generator')
        return imgs_outputs
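The __call__ idiom used above (an instance that is invoked like a function while keeping state such as the reuse flag between calls) can be illustrated framework-free; ScopedLayer below is a hypothetical toy, not part of the Generator:

```python
class ScopedLayer:
    """Toy version of the Generator's call pattern: the first call
    'builds' variables, later calls 'reuse' them (mirroring self._reuse)."""
    def __init__(self):
        self._reuse = False

    def __call__(self, x):
        built_fresh = not self._reuse
        self._reuse = True  # subsequent calls share the same variables
        return x * 2, built_fresh

layer = ScopedLayer()
print(layer(3))  # (6, True)  -- first call builds
print(layer(3))  # (6, False) -- second call reuses
```

This is why the real Generator sets self._reuse = True after the first call: calling g(...) a second time (e.g. for sampling) re-enters the same 'generator' variable scope with reuse enabled instead of creating duplicate weights.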
复制代码
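As a sanity check on the shape comments in the listing (each stride-2, SAME-padding transposed conv doubles the spatial side length), here is a small framework-free sketch; generator_output_size is an illustrative helper, not part of the original code:

```python
def generator_output_size(init_conv_size, channels):
    """Spatial side length of the generated image: one doubling per
    transposed-conv layer, i.e. len(channels) - 1 doublings in total."""
    size = init_conv_size
    for _ in range(1, len(channels)):
        size *= 2  # stride-2 conv2d_transpose with SAME padding doubles H and W
    return size

# With init_conv_size=4 and g_channels=[128, 64, 32, 1]:
# 4 -> 8 -> 16 -> 32, so the final images are 32x32x1.
print(generator_output_size(4, [128, 64, 32, 1]))  # 32
```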
Welcome to 东方耀AI技术分享 (http://www.ai111.vip/)
Powered by Discuz! X3.4