I have been searching for a bug in my code for some days now. I want to write a TimeDistributed NN in Python with Keras to classify a sequence of images. Therefore I wrote this code:
from tensorflow.keras.applications import mobilenet
from tensorflow.keras.layers import (Input, TimeDistributed, GlobalMaxPooling2D,
                                     GRU, Dense, Dropout)
from tensorflow.keras.models import Model

def create_rgb_model(self):
    # Input: a clip of 4 RGB frames, each of size VIDEO_SHAPE
    x = Input(shape=((4,) + VIDEO_SHAPE + (3,)))
    base_model = mobilenet.MobileNet(weights='imagenet', include_top=False,
                                     input_shape=VIDEO_SHAPE + (3,))
    # Freeze all but the last 9 layers of the backbone
    trainable = 9
    for layer in base_model.layers[:-trainable]:
        layer.trainable = False
    for layer in base_model.layers[-trainable:]:
        layer.trainable = True
    # Apply the CNN backbone to every frame, then aggregate over time
    som = TimeDistributed(base_model)(x)
    som = TimeDistributed(GlobalMaxPooling2D())(som)
    som = GRU(64)(som)
    som = Dense(1024, activation='relu')(som)
    som = Dropout(0.3)(som)
    som = Dense(512, activation='relu')(som)
    som = Dropout(0.3)(som)
    som = Dense(128, activation='relu')(som)
    som = Dropout(0.3)(som)
    som = Dense(64, activation='relu')(som)
    som = Dense(self.num_classes, activation='softmax')(som)
    model = Model(inputs=x, outputs=som)
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
                  metrics=['accuracy'], run_eagerly=True)
    return model
I use keras.utils.Sequence to feed fit() with data. I tried different batch sizes, but even with batch size 1, Python returns the segmentation fault. Furthermore, I tried running my code on both a GPU and a CPU.
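For reference, a minimal sketch of the batch shapes such a keras.utils.Sequence subclass would have to produce for this model: each item from __getitem__ must be a tuple (x, y) with x of shape (batch, 4, H, W, 3) to match the TimeDistributed input, and y one-hot encoded for categorical_crossentropy. The sketch below uses only NumPy and mimics the Sequence __len__/__getitem__ contract; VIDEO_SHAPE, SEQ_LEN, and NUM_CLASSES are assumed placeholder values, not from the original code.

```python
import numpy as np

# Assumed placeholders mirroring the question's setup
VIDEO_SHAPE = (224, 224)   # frame size; MobileNet's default input resolution
SEQ_LEN = 4                # frames per clip, matching Input(shape=(4,) + ...)
NUM_CLASSES = 5            # stand-in for self.num_classes

class ClipBatcher:
    """Mimics the keras.utils.Sequence contract (__len__ / __getitem__):
    each indexed item is one batch of (clips, one-hot labels)."""
    def __init__(self, n_clips, batch_size):
        self.n_clips = n_clips
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch (last batch may be smaller)
        return int(np.ceil(self.n_clips / self.batch_size))

    def __getitem__(self, idx):
        size = min(self.batch_size, self.n_clips - idx * self.batch_size)
        # fit() expects x of shape (batch, 4, H, W, 3) for this model
        x = np.random.rand(size, SEQ_LEN, *VIDEO_SHAPE, 3).astype('float32')
        y = np.eye(NUM_CLASSES)[np.random.randint(0, NUM_CLASSES, size)]
        return x, y.astype('float32')

batcher = ClipBatcher(n_clips=10, batch_size=4)
x, y = batcher[0]
print(len(batcher), x.shape, y.shape)  # 3 (4, 4, 224, 224, 3) (4, 5)
```

Checking that the shapes coming out of the generator match what the model's Input layer declares is a cheap first step when chasing a crash in fit(), since a shape mismatch is ruled out before suspecting the layers themselves.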
question from:
https://stackoverflow.com/questions/65924408/timedistributed-layers-in-keras-produce-segmentation-fault