
python - unable to train on the complete dataset in TensorFlow in Google Colab

I was trying to train a convolutional neural network to classify cats vs. dogs in Google Colab. Here is the code I am using:

import os
import pickle
import time

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.callbacks import TensorBoard

# Unique run name so every TensorBoard run gets its own subdirectory
Name = 'Cat-vs-Dogs-cnn-64x3-sigmoid-w-o-dense-v3' + str(int(time.time()))
# Raw string so the backslashes are not interpreted as escape sequences
di = os.path.join(r"C:\Users\naval\neural_network\cats_dogs\logs", Name)

tensorboard = TensorBoard(log_dir=di)

# For tensorflow-directml this only has an effect as an environment variable,
# not as a plain Python variable
os.environ["DML_VISIBLE_DEVICES"] = "1,0"

x = pickle.load(open("/content/drive/MyDrive/Colab Notebooks/data/x.pickle", 'rb'))
y = pickle.load(open("/content/drive/MyDrive/Colab Notebooks/data/y.pickle", 'rb'))
print(x.shape)

x = x.reshape(-1, 80, 80, 1)  # 80x80 grayscale images, single channel
x = x / 255.0                 # scale pixel values to [0, 1]

model = Sequential()
model.add(layers.Conv2D(64, (3, 3), input_shape=x.shape[1:]))
model.add(layers.Activation('relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))

model.add(layers.Conv2D(128, (3, 3)))
model.add(layers.Activation('relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))

model.add(layers.Conv2D(64, (3, 3)))
model.add(layers.Activation('relu'))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))

model.add(layers.Flatten())

# Single sigmoid output for binary cat-vs-dog classification
model.add(layers.Dense(1))
model.add(layers.Activation('sigmoid'))

model.compile(optimizer=keras.optimizers.Adam(),
              loss=keras.losses.BinaryCrossentropy(),
              metrics=['accuracy'])

model.fit(x, y, batch_size=32, epochs=9, validation_split=0.3, callbacks=[tensorboard])
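Note that the Windows log_dir above will not exist on a Colab runtime; below is a minimal sketch of a Colab-friendly TensorBoard setup (the Drive path is an assumption, adjust it to your own layout):

import os
from tensorflow.keras.callbacks import TensorBoard

# Hypothetical log directory on the mounted Drive; any writable Colab path works
log_dir = os.path.join("/content/drive/MyDrive/Colab Notebooks/logs", Name)
tensorboard = TensorBoard(log_dir=log_dir)

# In a separate Colab cell, TensorBoard can then be viewed inline:
#   %load_ext tensorboard
#   %tensorboard --logdir "/content/drive/MyDrive/Colab Notebooks/logs"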

The code works fine on my own machine; here is a screenshot of it training on my local device: [screenshot of local training progress]

The dimensions of x and y are as follows:

tf.shape(x) = tf.Tensor([24946    80    80     1], shape=(4,), dtype=int32) 
tf.shape(y) = tf.Tensor([24946], shape=(1,), dtype=int32)

Obviously my GPU isn't powerful enough, so I moved the code to Google Colab. Here is a snapshot of it training in Colab with GPU acceleration enabled: [screenshot of Colab training progress]

The problem is that on my local computer the network trains on the complete dataset, while on Colab it appears to train on only (complete dataset / batch_size) examples per epoch. Any fixes for this problem? Also, training in Colab is very slow compared to what other people report. (A sanity-check of the expected per-epoch numbers is sketched below.)
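For reference, with 24946 samples, validation_split=0.3 and batch_size=32, the per-epoch numbers work out roughly as follows. One hedged possibility is purely cosmetic: newer tf.keras progress bars count batches (steps) per epoch, while TF 1.x-style builds such as tensorflow-directml count samples, so the two bars are not directly comparable even if both runs see the full training set.

import math

num_samples = 24946
validation_split = 0.3
batch_size = 32

num_train = int(num_samples * (1 - validation_split))  # ~17462 samples used for training
num_val = num_samples - num_train                       # ~7484 samples held out for validation
steps_per_epoch = math.ceil(num_train / batch_size)     # ~546 batches shown per epoch by TF 2.x

print(num_train, num_val, steps_per_epoch)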

Also, I am using tensorflow-directml on an AMD Radeon RX 5500M locally, in case that is relevant.
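On the Colab side, a quick way to confirm the runtime is actually using its GPU (slow training is often a sign of silently falling back to CPU), assuming a standard TF 2.x Colab runtime:

import tensorflow as tf

# An empty list here means TensorFlow cannot see a GPU and will train on the CPU
print(tf.config.list_physical_devices('GPU'))
print(tf.test.gpu_device_name())  # prints something like '/device:GPU:0' when a GPU is active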



1 Answer

Waiting for an expert to answer.
