Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


numpy - Shape value doesn't match

I changed the .mat input file and got the error below. I have added the .npy generator code and the generic_batch_generator.py code in the last sections. I converted the .npy file to .mat and printed out both the .npy and the .mat file; they contained the same values. So I think the problem is a ValueError between the .npy generation code and generic_batch_generator.py.

D:\neuraltalk\imagernn\generic_batch_generator.py:141: FutureWarning: arrays to stack must be passed as a "sequence" type such as list or tuple. Support for non-sequence iterables such as generators is deprecated as of NumPy 1.16 and will raise an error in the future.
  F = np.row_stack(x['image']['feat'] for x in batch)
Traceback (most recent call last):
  File "predict_on_images.py", line 123, in <module>
    main(params)
  File "predict_on_images.py", line 79, in main
    Ys = BatchGenerator.predict([{'image':img}], model, checkpoint_params, **kwparams)
  File "D:\neuraltalk\imagernn\generic_batch_generator.py", line 144, in predict
    Xe = F.dot(We) + be # Xe becomes N x image_encoding_size
ValueError: shapes (1,3) and (4096,512) not aligned: 3 (dim 1) != 4096 (dim 0)
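The ValueError is NumPy's matrix-product shape rule: for `F.dot(We)` the number of columns of `F` (dim 1) must equal the number of rows of `We` (dim 0). A minimal reproduction with the shapes from the traceback:

```python
import numpy as np

# F must be N x 4096 to multiply against We, which is 4096 x 512.
F_bad = np.zeros((1, 3))       # what the second .mat file produced
F_good = np.zeros((1, 4096))   # what the model expects: one 4096-dim CNN vector
We = np.zeros((4096, 512))     # image encoder weights (image_size x image_encoding_size)

try:
    F_bad.dot(We)
except ValueError as e:
    print('mismatch:', e)      # the "not aligned" message from the traceback

Xe = F_good.dot(We)            # works: inner dimensions match
print(Xe.shape)                # (1, 512)
```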

So I printed both .mat files. The first one (no error):

{'__header__': b'MATLAB 5.0 MAT-file, Platform: GLNXA64, Created on: Mon Jan 26 22:35:58 2015', '__version__': '1.0', '__globals__': [], 'feats': array([[ 4.743388  ,  0.767779  , -0.        , ..., -0.        ,
        -0.        , -0.        ],
       [ 1.8808942 , -0.        ,  2.5954685 , ..., -0.        ,
        -0.        ,  1.2749501 ],
       [-0.        ,  0.47221863, -0.        , ..., -0.        ,
        -0.        , -0.        ],
       ...,
       [-0.        , -0.        ,  0.66008806, ..., -0.        ,
        -0.        , -0.        ],
       [ 0.6342027 ,  1.4374962 , -0.        , ..., -0.        ,
        -0.        ,  0.15309396],
       [-0.        , -0.        , -0.        , ..., -0.        ,
        -0.        ,  1.4712493 ]], dtype=float32)}

The second one (causes the error):

{'__header__': b'MATLAB 5.0 MAT-file Platform: nt, Created on: Mon Jan 11 01:36:24 2021', '__version__': '1.0', '__globals__': [], 'gene_features': array([[0.04952786, 0.07706442, 0.08203085, ..., 0.02015594, 0.01708626,
        0.01158991],
       [0.05283025, 0.06490523, 0.08657217, ..., 0.        , 0.00044708,
        0.04317199],
       [0.05222807, 0.0871853 , 0.09530147, ..., 0.02398774, 0.0577713 ,
        0.01362694]])}
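Comparing the two dumps, both the variable name and the array shape differ: the working file stores a wide float32 array under the key `feats`, while the failing file stores a 3-row array under `gene_features`. A sketch of a defensive check (the keys come from the dumps above; the widths 4096 and 300 are stand-ins, since the real widths are truncated in the printouts):

```python
import numpy as np

# In-memory stand-ins for scipy.io.loadmat results (loadmat adds the
# '__header__' etc. keys, which a loader should skip).
mat_ok = {'feats': np.zeros((10, 4096), dtype=np.float32)}
mat_bad = {'gene_features': np.zeros((3, 300))}

def check_features(mat, expected_dim=4096):
    """Return (variable name, shape, whether it fits an N x 4096 loader)."""
    keys = [k for k in mat if not k.startswith('__')]
    arr = mat[keys[0]]
    ok = arr.ndim == 2 and arr.shape[1] == expected_dim
    return keys[0], arr.shape, ok

print(check_features(mat_ok))   # ('feats', (10, 4096), True)
print(check_features(mat_bad))  # ('gene_features', (3, 300), False)
```

If the loading code indexes the file by the variable name `feats`, the second file will not even be found under that key, so both the name and the feature width need to match.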

How can I fix it? This is the code I use to make the .npy file (I convert the .npy file to .mat in another section).

def feature_extraction_images(model, cores, batch_sz, image_list, output_path):
    """
    Function that extracts the intermediate CNN features
    of each image in a provided image list.

    Args:
      model: CNN network
      cores: CPU cores for the parallel image loading
      batch_sz: batch size fed to the CNN network
      image_list: list of images to extract features from
      output_path: path to store the image features
    """
    images = [image.strip() for image in open(image_list).readlines()]
    print('\nNumber of images: ', len(images))
    print('Storage directory: ', output_path)
    print('CPU cores: ', cores)
    print('Batch size: ', batch_sz)

    print('\nFeature Extraction Process')
    print('==========================')
    pool = Pool(cores)
    batches = len(images) // batch_sz + 1
    features = np.zeros((len(images), model.final_sz))
    for batch in tqdm(range(batches), mininterval=1.0, unit='batches'):

        # load images in parallel
        future = []
        for image in images[batch * batch_sz: (batch + 1) * batch_sz]:
            future += [pool.apply_async(load_image, args=[image, model.desired_size])]

        image_tensor = []
        for f in future:
            image_tensor += [f.get()]

        # extract features
        features[int(batch * batch_sz): int((batch + 1) * batch_sz)] = \
            model.extract(np.array(image_tensor), batch_sz)

    # save features
    np.save(os.path.join(output_path, '{}_features'.format(model.net_name)), features)
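Since the prediction code reads its features from a .mat file, one way to convert the saved .npy array is `scipy.io.savemat`, storing the array under the variable name `feats` that the working dump uses. A sketch, assuming SciPy is available (the in-memory buffer and random array are stand-ins for a real file path and `np.load` on the saved features):

```python
import io
import numpy as np
from scipy.io import savemat, loadmat

# Stand-in for: features = np.load('..._features.npy')
features = np.random.rand(5, 4096).astype(np.float32)

buf = io.BytesIO()                 # use a real .mat path in practice
savemat(buf, {'feats': features})  # key must match what the loader expects
buf.seek(0)
restored = loadmat(buf)['feats']
print(restored.shape)              # (5, 4096)
```

Note this only fixes the variable name; the extractor must also produce 4096-dimensional vectors (e.g. VGG fc7 features), or the dot product in `predict` will still fail.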

This is the code of generic_batch_generator.py:

  @staticmethod
  def init(params, misc):

    # inputs
    image_encoding_size = params.get('image_encoding_size', 128)
    word_encoding_size = params.get('word_encoding_size', 128)
    hidden_size = params.get('hidden_size', 128)
    generator = params.get('generator', 'lstm')
    vocabulary_size = len(misc['wordtoix'])
    output_size = len(misc['ixtoword']) # these should match though
    image_size = 4096 # size of CNN vectors hardcoded here

    if generator == 'lstm':
      assert image_encoding_size == word_encoding_size, 'this implementation does not support different sizes for these parameters'

    # initialize the encoder models
    model = {}
    model['We'] = initw(image_size, image_encoding_size) # image encoder
    model['be'] = np.zeros((1,image_encoding_size))
    model['Ws'] = initw(vocabulary_size, word_encoding_size) # word encoder
    update = ['We', 'be', 'Ws']
    regularize = ['We', 'Ws']
    init_struct = { 'model' : model, 'update' : update, 'regularize' : regularize}

    # descend into the specific Generator and initialize it
    Generator = decodeGenerator(generator)
    generator_init_struct = Generator.init(word_encoding_size, hidden_size, output_size)
    merge_init_structs(init_struct, generator_init_struct)
    return init_struct
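Given `image_size = 4096` hardcoded in `init()`, and the encoding size 512 visible in the traceback (the checkpoint overrides the default of 128), `We` has shape (4096, 512), so the features `F` fed into `predict` must be N x 4096. A small shape sketch (`np.random.randn` stands in for `initw`):

```python
import numpy as np

image_size, image_encoding_size = 4096, 512   # sizes from init() and the traceback
We = 0.01 * np.random.randn(image_size, image_encoding_size)  # stand-in for initw
be = np.zeros((1, image_encoding_size))

F = np.random.rand(3, image_size)  # three images, one 4096-dim CNN vector each
Xe = F.dot(We) + be                # Xe becomes N x image_encoding_size
print(Xe.shape)                    # (3, 512)
```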


1 Answer

Waiting for an expert to reply.
