
python - How to train categorical CNN?

I'm currently trying to train a model to do bird species recognition. The model will later be converted and hosted on an Arduino Nano 33 BLE near a place where birds come to eat.

To train my model I used the Kaggle API to fetch a dataset that contains 250 species split into train, validation and test sets. The images are 224x224 RGB .jpg files. To ease data labelling I used the Keras preprocessing tools, which let me label data based on the folder it sits in; this works perfectly.

Here is the preprocessing:


    from tensorflow.keras.preprocessing.image import ImageDataGenerator
    
    # All images will be augmented
    train_datagen = ImageDataGenerator(
          rescale=1./255,
          rotation_range=40,
          width_shift_range=0.2,
          height_shift_range=0.2,
          shear_range=0.2,
          zoom_range=0.2,
          horizontal_flip=True,
          fill_mode='nearest')
    
    # Flow training images in batches of 128 using train_datagen generator
    train_generator = train_datagen.flow_from_directory(
            '/content/train',  # This is the source directory for training images
            target_size=(224, 224),  # All images will be resized to 224x224
            batch_size=128,
            class_mode='binary',
            color_mode='rgb',
            save_format='jpg')
    
    validation_datagen = ImageDataGenerator(rescale=1/255)
    
    validation_generator = validation_datagen.flow_from_directory(
            '/content/valid',
            target_size=(224, 224),
            class_mode='categorical',
            color_mode='rgb',
            save_format='jpg')

Then I created a Keras model with convolution and max-pooling layers to process the data, followed by two dense layers, the last one using softmax activation. Here is my model code:


    import tensorflow as tf
    
    model = tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(224, 224, 3)),
        tf.keras.layers.MaxPooling2D(2, 2),
        tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2,2),
        tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2,2),
        tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2,2),
        tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
        tf.keras.layers.MaxPooling2D(2,2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation='relu'),
        tf.keras.layers.Dense(250, activation='softmax')
    ])

The error I'm facing is:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-58-6a14ef1f8bcb> in <module>()
      4       epochs=15,
      5       verbose=1,
----> 6       validation_data=validation_generator)

6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
     59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:
     62     if name is not None:

InvalidArgumentError:  Can not squeeze dim[1], expected a dimension of 1, got 250
     [[node Squeeze (defined at <ipython-input-58-6a14ef1f8bcb>:6) ]] [Op:__inference_test_function_3788]

Function call stack:
test_function

The repository for my project: https://github.com/BaptisteZloch/Birds-species-spotting

I hope someone can help me solve this problem!

Regards, Baptiste ZLOCH

Question from: https://stackoverflow.com/questions/65834579/how-to-train-categorical-cnn


1 Answer


I am the creator of the dataset you are using. You really do not need much image augmentation, as there are 35,215 training images, 1,250 test images (5 per species) and 1,250 validation images (5 per species). So at most I would only use horizontal_flip=True; all the rest will contribute little and increase processing time. This is a super clean dataset where the bird region of interest is at least 50% of the pixels in the image.

In your train generator you should have class_mode='categorical', matching your validation generator; with class_mode='binary' the generator produces a single label column, and that shape mismatch against the 250-class categorical validation labels is what triggers the "Can not squeeze dim[1], expected a dimension of 1, got 250" error. Also, you have save_format='jpg'. This parameter is ignored when you do not specify save_to_dir, and it is good you did not specify it, as during training you would fill that directory with tons of images. The code below uses target_size=(150, 150), so change your model to input_shape=(150, 150, 3).

I have also added two callbacks, early_stop and rlronp. The first monitors the validation loss and halts training if the loss fails to decrease for 4 consecutive epochs; with restore_best_weights=True it restores the weights from the epoch with the lowest validation loss. The second monitors the validation loss and reduces the learning rate by a factor of 0.5 whenever the loss fails to decrease at the end of an epoch. See the Keras callbacks documentation for details. Working code is shown below:

import tensorflow as tf
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

model.compile(Adam(learning_rate=0.001), loss='categorical_crossentropy', metrics=['accuracy'])

train_dir = r'c:\temp\birds\train'  # change this to point to your directory
valid_dir = r'c:\temp\birds\valid'  # change this to point to your directory
test_dir = r'c:\temp\birds\test'    # change this to point to your directory
train_gen = ImageDataGenerator(rescale=1/255, horizontal_flip=True).flow_from_directory(
        train_dir, target_size=(150, 150), batch_size=32, seed=123,
        class_mode='categorical', color_mode='rgb', shuffle=True)
valid_gen = ImageDataGenerator(rescale=1/255).flow_from_directory(
        valid_dir, target_size=(150, 150), batch_size=32, seed=123,
        class_mode='categorical', color_mode='rgb', shuffle=False)
test_gen = ImageDataGenerator(rescale=1/255).flow_from_directory(
        test_dir, target_size=(150, 150), batch_size=32, seed=123,
        class_mode='categorical', color_mode='rgb', shuffle=False)

# Stop training if val_loss fails to improve for 4 consecutive epochs, restoring the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=4,
                                              verbose=1, restore_best_weights=True)
# Halve the learning rate whenever val_loss fails to improve for an epoch.
rlronp = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                                              patience=1, verbose=1)

history = model.fit(x=train_gen, epochs=30, verbose=1,
                    callbacks=[early_stop, rlronp], validation_data=valid_gen)

performance = model.evaluate(test_gen, verbose=1)[1] * 100
print('Model accuracy on the test set is', performance, '%')

With 250 classes your model will not achieve too high a value of accuracy. The more classes there are, the more difficult the problem gets. I would create a more complex model with more convolutional layers and perhaps an additional dense layer. If you add an additional dense layer, include a dropout layer to prevent overfitting; one possible layout is sketched below.
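For example, a deeper model with dropout might look like the sketch below. This is only a rough illustration: the filter counts, the extra 256-unit dense layer and the 0.4 dropout rate are untuned guesses, and the input shape assumes the 150x150 images used above.

import tensorflow as tf

# A sketch of a deeper model: wider convolutional blocks, an extra dense
# layer, and dropout layers to reduce overfitting. All sizes are untuned guesses.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.4),                      # drop 40% of activations during training
    tf.keras.layers.Dense(256, activation='relu'),     # the additional dense layer
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(250, activation='softmax')   # one output per species
])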

