Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share

julia - Julia MethodError: no method matching (::Dense{typeof(logistic),CuArray{Float32,2,Nothing},CuArray{Float32,1,Nothing}})(::Float32)

I have the following training data in CuArrays:

X: 300×8544 CuArray{Float32,2,Nothing}
y: 5×8544 Flux.OneHotMatrix{CuArray{Flux.OneHotVector,1,Nothing}}

and I have the following model I want to train:

# define activation
logistic(x) = 1. / (1 .+ exp.(-x))

# first define a 2-layer MLP model
model = Chain(Dense(300, 64, logistic),
          Dense(64, c),
          softmax) |> gpu

# define the loss
loss(x, y) = Flux.crossentropy(model(x), y)

# define the optimiser
optimiser = ADAM()

but if I do

Flux.train!(loss, params(model), zip(X, y), optimiser)

I get the following error:

MethodError: no method matching (::Dense{typeof(logistic),CuArray{Float32,2,Nothing},CuArray{Float32,1,Nothing}})(::Float32)

How should I resolve this?

asked by D.Danier, translated from Stack Overflow


1 Answer


@D.Danier Please provide a minimal working example (MWE), meaning complete code that people can copy, paste, and run. Below is an example:

#Pkg.activate("c:/scratch/flux-test")

using CuArrays, Flux
CuArrays.allowscalar(false)

# define activation
# you don't need the broadcast dots for a scalar definition
logistic(x) = 1 / (1 + exp(-x))

# ensure your code works on GPU
CuArrays.@cufunc logistic(x) = 1 / (1 + exp(-x))

X = cu(rand(300, 8544))
y = cu(rand(5, 8544))
c = 5

# first define a 2-layer MLP model
model = Chain(Dense(300, 64, logistic),
          Dense(64, c),
          softmax) |> gpu

# define the loss
loss(x, y) = Flux.crossentropy(model(x), y) |> gpu

model(X)

# define the optimiser
optimiser = ADAM()

loss(X, y)

Flux.train!(loss, params(model), zip(eachcol(X), eachcol(y)), optimiser)

When you call Flux.train!, you must tell Flux that you want to pair up the columns of X and y to compute the loss. BTW, this is probably less than ideal, as it runs one iteration per column. You may want to group the columns into mini-batches. Or, if your problem is genuinely this small, you may want to compute the whole thing in one go, e.g.

Flux.train!(loss, params(model), [(X, y)], optimiser)

which basically says: compute the loss on the whole of X and y at once.
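The mini-batch suggestion above can be sketched with Flux's DataLoader. This is a sketch, not drop-in code: the import path (Flux.Data vs. the top-level Flux namespace) and the exact keyword names have varied across Flux releases, and the batch size of 128 is an arbitrary illustrative choice.

```julia
using Flux
using Flux.Data: DataLoader   # in some Flux versions this is exported from Flux directly

# Pair up the columns of X and y into shuffled mini-batches of 128 samples.
batches = DataLoader((X, y), batchsize = 128, shuffle = true)

# train! now sees one (x_batch, y_batch) tuple per iteration,
# instead of one column pair per iteration as with zip(eachcol(X), eachcol(y)).
Flux.train!(loss, params(model), batches, optimiser)
```

This keeps each gradient step on a GPU-friendly matrix of columns rather than a single sample, which is usually both faster and better-behaved than per-column updates.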

