
Commit 701d4d3 (parent: bfffb58)

2wins authored; Jonathan DEKHTIAR committed

Issue #642 - Add AtrousDeConv2dLayer (#662)

* Update visualize.py
* Update README.md: add an example for Adversarial Learning
* Update more.rst: update the URLs
* Update more.rst
* Update example.rst: add the same link of BEGAN implementation
* Update example.rst
* Update example.rst
* Create tutorial_tfslim.py: fixes #552
* Update tutorial_tfslim.py
* Update utils.py: fix #565
* Update utils.py
* Create test_utils_predict.py: related to #288, #565, #566
* Create test_utils_predict.py
* Update utils.py
* Update test_utils_predict.py
* Update CHANGELOG.md: related to #566
* Update test_utils_predict.py
* Update CHANGELOG.md
* Update CHANGELOG.md
* Update CHANGELOG.md
* Update test_utils_predict.py
* Update test_utils_predict.py
* Update test_utils_predict.py
* Update test_utils_predict.py
* Update test_utils_predict.py
* Update test_utils_predict.py (fix bad coding style)
* Update test_utils_predict.py
* Update CHANGELOG.md
* Update CHANGELOG.md
* Update CHANGELOG.md
* Update CHANGELOG.md
* Update convolution.py (add AtrousConv2dTransLayer)
* Add AtrousConv2dTransLayer
* Fix some mistakes
* Follow protocols
* Fix coding style (yapf)
* AtrousConv2dLayer fixed
* AtrousConv2dTransposeLayer refactored
* Fix coding style (yapf)
* Fix error
* Bias add using premade tf func
* Old TF code removed
* Renamed to AtrousDeConv2dLayer
* Update CHANGELOG.md
* Release 1.8.6rc2
* Documentation fix

27 files changed (+312, -289 lines)

.circleci/config.yml (+33, -9)

@@ -217,7 +217,7 @@ jobs:
             CONTAINER_TAG="$VERSION_PREFIX"-gpu-py3
             docker tag latest_py3_gpu:latest tensorlayer/tensorlayer:"$CONTAINER_TAG"
             docker push tensorlayer/tensorlayer:"$CONTAINER_TAG"
-
+
             CONTAINER_TAG="$VERSION_PREFIX"-py3-gpu
             docker tag latest_py3_gpu:latest tensorlayer/tensorlayer:"$CONTAINER_TAG"
             docker push tensorlayer/tensorlayer:"$CONTAINER_TAG"

@@ -411,7 +411,7 @@ jobs:
             CONTAINER_TAG="$TL_VERSION"-cpu-py3
             docker tag latest_py3_cpu:latest tensorlayer/tensorlayer:"$CONTAINER_TAG"
             docker push tensorlayer/tensorlayer:"$CONTAINER_TAG"
-
+
             CONTAINER_TAG="$TL_VERSION"-py3-cpu
             docker tag latest_py3_cpu:latest tensorlayer/tensorlayer:"$CONTAINER_TAG"
             docker push tensorlayer/tensorlayer:"$CONTAINER_TAG"

@@ -461,11 +461,26 @@ jobs:
             CONTAINER_TAG="$TL_VERSION"-gpu-py3
             docker tag latest_py3_gpu:latest tensorlayer/tensorlayer:"$CONTAINER_TAG"
             docker push tensorlayer/tensorlayer:"$CONTAINER_TAG"
-
+
             CONTAINER_TAG="$TL_VERSION"-py3-gpu
             docker tag latest_py3_gpu:latest tensorlayer/tensorlayer:"$CONTAINER_TAG"
             docker push tensorlayer/tensorlayer:"$CONTAINER_TAG"

+  init_tag_build:
+    working_directory: ~/build
+    docker:
+      - image: docker:git
+    steps:
+      - checkout
+      - setup_remote_docker:
+          reusable: true
+          exclusive: true
+
+      - run:
+          name: Init Tag Deploy Build
+          command: |
+            echo "start tag workflow"
+
 ###################################################################################
 ###################################################################################
 ###################################################################################

@@ -546,12 +561,21 @@ workflows:
 ###################################################################################
 # TAGS BUILDS with TensorLayer installed from PyPI #
 ###################################################################################
-
+
+      - init_tag_build:
+          filters:
+            tags:
+              only: /\d+\.\d+(\.\d+)?(\S*)?$/
+            branches:
+              ignore: /.*/
+
       - hold:
           type: approval
+          requires:
+            - init_tag_build
           filters:
             tags:
-              only: /.*/
+              only: /\d+\.\d+(\.\d+)?(\S*)?$/
             branches:
               ignore: /.*/

@@ -560,7 +584,7 @@ workflows:
             - hold
           filters:
             tags:
-              only: /.*/
+              only: /\d+\.\d+(\.\d+)?(\S*)?$/
            branches:
              ignore: /.*/

@@ -569,7 +593,7 @@ workflows:
             - hold
           filters:
             tags:
-              only: /.*/
+              only: /\d+\.\d+(\.\d+)?(\S*)?$/
            branches:
              ignore: /.*/

@@ -578,7 +602,7 @@ workflows:
             - hold
           filters:
             tags:
-              only: /.*/
+              only: /\d+\.\d+(\.\d+)?(\S*)?$/
            branches:
              ignore: /.*/

@@ -587,6 +611,6 @@ workflows:
             - hold
           filters:
             tags:
-              only: /.*/
+              only: /\d+\.\d+(\.\d+)?(\S*)?$/
            branches:
              ignore: /.*/
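The catch-all /.*/ tag filter is replaced above by a version-shaped pattern, so only tags like 1.8.6 or 1.8.6rc2 trigger the deploy workflow. A minimal sketch of the new pattern as a Python regex, assuming CircleCI's full-string matching semantics for `only:` filters (approximated here with `re.fullmatch`):

```python
import re

# The release-tag filter from the diff above, written as a Python regex.
RELEASE_TAG_RE = re.compile(r"\d+\.\d+(\.\d+)?(\S*)?$")

def is_release_tag(tag):
    """True for version-like tags (1.8, 1.8.6, 1.8.6rc2), False otherwise."""
    return RELEASE_TAG_RE.fullmatch(tag) is not None
```

Note the pattern is not anchored with `^`, so a prefix such as `v1.8.6` is rejected only under full-match semantics, which is what CircleCI applies to `filters.tags.only`.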

CHANGELOG.md (+3, -2)

@@ -94,6 +94,7 @@ To release a new version, please update the changelog as followed:
 - `AMSGrad` added on Optimizers page added (by @DEKHTIARJonathan in #636)
 - Layer:
   - ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (by @One-sixth in #579)
+  - AtrousDeConv2dLayer added (by @2wins in #662)
 - Optimizer:
   - AMSGrad Optimizer added based on `On the Convergence of Adam and Beyond (ICLR 2018)` (by @DEKHTIARJonathan in #636)
 - Setup:

@@ -291,6 +292,6 @@ To release a new version, please update the changelog as followed:
 ### Contributors
 @zsdonghao @luomai @DEKHTIARJonathan

-[Unreleased]: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc1...master
-[1.8.6]: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc1...1.8.5
+[Unreleased]: https://github.com/tensorlayer/tensorlayer/compare/1.8.5...master
+[1.8.6]: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc2...1.8.5
 [1.8.5]: https://github.com/tensorlayer/tensorlayer/compare/1.8.4...1.8.5

README.md (+1, -1)

@@ -11,7 +11,7 @@
 [![Chinese Book](https://img.shields.io/badge/book-中文-blue.svg)](http://www.broadview.com.cn/book/5059/)

 [![PyPI version](https://badge.fury.io/py/tensorlayer.svg)](https://pypi.org/project/tensorlayer/)
-[![Github commits (since latest release)](https://img.shields.io/github/commits-since/tensorlayer/tensorlayer/latest.svg)](https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc1...master)
+[![Github commits (since latest release)](https://img.shields.io/github/commits-since/tensorlayer/tensorlayer/latest.svg)](https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc2...master)
 [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/tensorlayer.svg)](https://pypi.org/project/tensorlayer/)
 [![Supported TF Version](https://img.shields.io/badge/tensorflow-1.6.0+-blue.svg)](https://github.com/tensorflow/tensorflow/releases)

README.rst (+1, -1)

@@ -13,7 +13,7 @@
    :target: https://pypi.org/project/tensorlayer/

 .. image:: https://img.shields.io/github/commits-since/tensorlayer/tensorlayer/latest.svg
-    :target: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc1...master
+    :target: https://github.com/tensorlayer/tensorlayer/compare/1.8.6rc2...master

 .. image:: https://img.shields.io/pypi/pyversions/tensorlayer.svg
    :target: https://pypi.org/project/tensorlayer/

docs/modules/layers.rst (+5)

@@ -254,6 +254,7 @@ Layer list
    DownSampling2dLayer
    AtrousConv1dLayer
    AtrousConv2dLayer
+   AtrousDeConv2dLayer

    Conv1d
    Conv2d

@@ -476,6 +477,10 @@ Convolutional layer (Pro)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 .. autoclass:: AtrousConv2dLayer

+2D Atrous transposed convolution
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. autoclass:: AtrousDeConv2dLayer
+

 Convolutional layer (Simplified)
 -----------------------------------
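For intuition about the operation the newly documented AtrousDeConv2dLayer performs, here is a dependency-free sketch of a 2D atrous (dilated) transposed convolution: each input value is scattered into the output through a kernel whose taps sit `rate` cells apart. This is an illustrative single-channel, stride-1, no-padding toy, not TensorLayer's implementation; the function name and shapes are assumptions:

```python
def atrous_conv2d_transpose(x, kernel, rate):
    """Toy 2D atrous transposed convolution on nested lists.

    x: H x W input, kernel: kh x kw taps, rate: dilation factor.
    Output size is H + (kh - 1) * rate by W + (kw - 1) * rate.
    """
    kh, kw = len(kernel), len(kernel[0])
    in_h, in_w = len(x), len(x[0])
    out_h = in_h + (kh - 1) * rate
    out_w = in_w + (kw - 1) * rate
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(in_h):
        for j in range(in_w):
            for di in range(kh):
                for dj in range(kw):
                    # each kernel tap lands `rate` cells apart in the output
                    out[i + di * rate][j + dj * rate] += x[i][j] * kernel[di][dj]
    return out
```

With rate=1 this reduces to an ordinary "full" transposed convolution; increasing rate widens the receptive field without adding kernel parameters, which is the point of the atrous family.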

example/tutorial_binarynet_cifar10_tfrecord.py (+8, -8)

@@ -108,26 +108,26 @@ def read_and_decode(filename, is_train=None):
 if is_train ==True:
     # 1. Randomly crop a [height, width] section of the image.
     img = tf.random_crop(img, [24, 24, 3])
+
     # 2. Randomly flip the image horizontally.
     img = tf.image.random_flip_left_right(img)
+
     # 3. Randomly change brightness.
     img = tf.image.random_brightness(img, max_delta=63)
+
     # 4. Randomly change contrast.
     img = tf.image.random_contrast(img, lower=0.2, upper=1.8)
+
     # 5. Subtract off the mean and divide by the variance of the pixels.
-    try: # TF 0.12+
-        img = tf.image.per_image_standardization(img)
-    except Exception: # earlier TF versions
-        img = tf.image.per_image_whitening(img)
+    img = tf.image.per_image_standardization(img)

 elif is_train == False:
     # 1. Crop the central [height, width] of the image.
     img = tf.image.resize_image_with_crop_or_pad(img, 24, 24)
+
     # 2. Subtract off the mean and divide by the variance of the pixels.
-    try: # TF 0.12+
-        img = tf.image.per_image_standardization(img)
-    except Exception: # earlier TF versions
-        img = tf.image.per_image_whitening(img)
+    img = tf.image.per_image_standardization(img)
+
 elif is_train == None:
     img = img
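The tutorial above (and the three below) now call tf.image.per_image_standardization unconditionally, dropping the fallback to the pre-0.12 per_image_whitening name. What the op computes can be sketched in plain Python: subtract the per-image mean, then divide by max(stddev, 1/sqrt(N)), where the floor guards against division by zero on constant images. A hedged sketch over a flat pixel list (the function name mirrors the TF op; exact TF edge-case behaviour is an assumption):

```python
import math

def per_image_standardization(pixels):
    """Zero-mean, unit-variance scaling of one image's pixels.

    pixels: flat list of floats. Mirrors tf.image.per_image_standardization:
    divide by max(stddev, 1/sqrt(N)) so constant images do not divide by zero.
    """
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    adjusted_std = max(math.sqrt(variance), 1.0 / math.sqrt(n))
    return [(p - mean) / adjusted_std for p in pixels]
```

For example, `[0, 2, 0, 2]` has mean 1 and stddev 1, so it standardizes to `[-1, 1, -1, 1]`.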

example/tutorial_cifar10_tfrecord.py (+8, -8)

@@ -109,26 +109,26 @@ def read_and_decode(filename, is_train=None):
 if is_train ==True:
     # 1. Randomly crop a [height, width] section of the image.
     img = tf.random_crop(img, [24, 24, 3])
+
     # 2. Randomly flip the image horizontally.
     img = tf.image.random_flip_left_right(img)
+
     # 3. Randomly change brightness.
     img = tf.image.random_brightness(img, max_delta=63)
+
     # 4. Randomly change contrast.
     img = tf.image.random_contrast(img, lower=0.2, upper=1.8)
+
     # 5. Subtract off the mean and divide by the variance of the pixels.
-    try: # TF 0.12+
-        img = tf.image.per_image_standardization(img)
-    except Exception: # earlier TF versions
-        img = tf.image.per_image_whitening(img)
+    img = tf.image.per_image_standardization(img)

 elif is_train == False:
     # 1. Crop the central [height, width] of the image.
     img = tf.image.resize_image_with_crop_or_pad(img, 24, 24)
+
     # 2. Subtract off the mean and divide by the variance of the pixels.
-    try: # TF 0.12+
-        img = tf.image.per_image_standardization(img)
-    except Exception: # earlier TF versions
-        img = tf.image.per_image_whitening(img)
+    img = tf.image.per_image_standardization(img)
+
 elif is_train == None:
     img = img

example/tutorial_dorefanet_cifar10_tfrecord.py (+8, -8)

@@ -108,26 +108,26 @@ def read_and_decode(filename, is_train=None):
 if is_train ==True:
     # 1. Randomly crop a [height, width] section of the image.
     img = tf.random_crop(img, [24, 24, 3])
+
     # 2. Randomly flip the image horizontally.
     img = tf.image.random_flip_left_right(img)
+
     # 3. Randomly change brightness.
     img = tf.image.random_brightness(img, max_delta=63)
+
     # 4. Randomly change contrast.
     img = tf.image.random_contrast(img, lower=0.2, upper=1.8)
+
     # 5. Subtract off the mean and divide by the variance of the pixels.
-    try: # TF 0.12+
-        img = tf.image.per_image_standardization(img)
-    except Exception: # earlier TF versions
-        img = tf.image.per_image_whitening(img)
+    img = tf.image.per_image_standardization(img)

 elif is_train == False:
     # 1. Crop the central [height, width] of the image.
     img = tf.image.resize_image_with_crop_or_pad(img, 24, 24)
+
     # 2. Subtract off the mean and divide by the variance of the pixels.
-    try: # TF 0.12+
-        img = tf.image.per_image_standardization(img)
-    except Exception: # earlier TF versions
-        img = tf.image.per_image_whitening(img)
+    img = tf.image.per_image_standardization(img)
+
 elif is_train == None:
     img = img

example/tutorial_inceptionV3_tfslim.py (+1, -4)

@@ -129,10 +129,7 @@ def print_prob(prob):
         "Please download inception_v3 ckpt from https://github.com/tensorflow/models/tree/master/research/slim"
     )

-    try: # TF12+
-        saver.restore(sess, MODEL_PATH)
-    except Exception: # TF11
-        saver.restore(sess, MODEL_PATH)
+    saver.restore(sess, MODEL_PATH)
     print("Model Restored")

     y = network.outputs

example/tutorial_ptb_lstm.py (+3, -2)

@@ -258,8 +258,9 @@ def loss_fn(outputs, targets): #, batch_size, num_steps):
     # n_examples = batch_size * num_steps
     # so
     # cost is the averaged cost of each mini-batch (concurrent process).
-    loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example( # loss = tf.nn.seq2seq.sequence_loss_by_example( # TF0.12
-        [outputs], [tf.reshape(targets, [-1])], [tf.ones_like(tf.reshape(targets, [-1]), dtype=tf.float32)])
+    loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
+        [outputs], [tf.reshape(targets, [-1])], [tf.ones_like(tf.reshape(targets, [-1]), dtype=tf.float32)]
+    )
     # [tf.ones([batch_size * num_steps])])
     cost = tf.reduce_sum(loss) / batch_size
     return cost
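The reformatted call above passes one logits list, one flattened target list, and a weight tensor of ones. What sequence_loss_by_example computes can be sketched in plain Python: per-timestep softmax cross-entropy, weighted, then averaged by the total weight. This is an illustrative sketch of the TF default (average_across_timesteps=True), not the TF source; the epsilon in the denominator mirrors the guard TF uses:

```python
import math

def sequence_loss_by_example(logits_seq, targets_seq, weights_seq):
    """Weighted per-timestep softmax cross-entropy, averaged across timesteps.

    logits_seq: list of per-step logit lists; targets_seq: list of class
    indices; weights_seq: list of per-step weights.
    """
    total, weight_sum = 0.0, 0.0
    for logits, target, weight in zip(logits_seq, targets_seq, weights_seq):
        # numerically stable log-sum-exp for the softmax normalizer
        m = max(logits)
        log_z = m + math.log(sum(math.exp(v - m) for v in logits))
        total += weight * (log_z - logits[target])
        weight_sum += weight
    return total / (weight_sum + 1e-12)
```

A uniform two-way prediction at a single step gives the expected loss of log 2, i.e. one bit of perplexity per word.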

example/tutorial_ptb_lstm_state_is_tuple.py (+3, -2)

@@ -264,8 +264,9 @@ def loss_fn(outputs, targets, batch_size):
     # n_examples = batch_size * num_steps
     # so
     # cost is the averaged cost of each mini-batch (concurrent process).
-    loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example( # loss = tf.nn.seq2seq.sequence_loss_by_example( # TF0.12
-        [outputs], [tf.reshape(targets, [-1])], [tf.ones_like(tf.reshape(targets, [-1]), dtype=tf.float32)])
+    loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
+        [outputs], [tf.reshape(targets, [-1])], [tf.ones_like(tf.reshape(targets, [-1]), dtype=tf.float32)]
+    )
     # [tf.ones([batch_size * num_steps])])
     cost = tf.reduce_sum(loss) / batch_size
     return cost

example/tutorial_ternaryweight_cifar10_tfrecord.py (+8, -8)

@@ -107,26 +107,26 @@ def read_and_decode(filename, is_train=None):
 if is_train ==True:
     # 1. Randomly crop a [height, width] section of the image.
     img = tf.random_crop(img, [24, 24, 3])
+
     # 2. Randomly flip the image horizontally.
     img = tf.image.random_flip_left_right(img)
+
     # 3. Randomly change brightness.
     img = tf.image.random_brightness(img, max_delta=63)
+
     # 4. Randomly change contrast.
     img = tf.image.random_contrast(img, lower=0.2, upper=1.8)
+
     # 5. Subtract off the mean and divide by the variance of the pixels.
-    try: # TF 0.12+
-        img = tf.image.per_image_standardization(img)
-    except Exception: # earlier TF versions
-        img = tf.image.per_image_whitening(img)
+    img = tf.image.per_image_standardization(img)

 elif is_train == False:
     # 1. Crop the central [height, width] of the image.
     img = tf.image.resize_image_with_crop_or_pad(img, 24, 24)
+
     # 2. Subtract off the mean and divide by the variance of the pixels.
-    try: # TF 0.12+
-        img = tf.image.per_image_standardization(img)
-    except Exception: # earlier TF versions
-        img = tf.image.per_image_whitening(img)
+    img = tf.image.per_image_standardization(img)
+
 elif is_train == None:
     img = img
