
0 votes
606 views
in Technique by (71.8m points)

recursion - Explanation of lists:fold function

I'm learning more and more about the Erlang language and have recently run into a problem. I read about the foldl(Fun, Acc0, List) -> Acc1 function. I was following the learnyousomeerlang.com tutorial, and there was an example (the example is a Reverse Polish Notation calculator in Erlang):

% splits the expression on spaces and folds rpn/2 over the tokens to evaluate it
rpn(L) when is_list(L) ->
  [Res] = lists:foldl(fun rpn/2, [], string:tokens(L," ")),
  Res.

% function that converts a string to an integer or floating point value
read(N) ->
  case string:to_float(N) of
    % string:to_float/1 returns {error, no_float} when there is no float available
    {error,no_float} -> list_to_integer(N);
    {F,_} -> F
  end.

% rpn/2 handles every token: operators pop their arguments from the stack, numbers are pushed onto it
rpn("+",[N1,N2|S]) -> [N2+N1|S];
rpn("-", [N1,N2|S]) -> [N2-N1|S];
rpn("*", [N1,N2|S]) -> [N2*N1|S];
rpn("/", [N1,N2|S]) -> [N2/N1|S];
rpn("^", [N1,N2|S]) -> [math:pow(N2,N1)|S];
rpn("ln", [N|S])    -> [math:log(N)|S];
rpn("log10", [N|S]) -> [math:log10(N)|S];
rpn(X, Stack) -> [read(X) | Stack].

As far as I understand, lists:foldl executes rpn/2 on every element of the list, but that is as far as my understanding of this function goes. I read the documentation, but it did not help me much. Can someone explain to me how lists:foldl works?


1 Answer

0 votes
by (71.8m points)

Let's say we want to add a list of numbers together:

1 + 2 + 3 + 4.

This is a pretty normal way to write it. But I wrote "add a list of numbers together", not "write numbers with pluses between them". There is something fundamentally different between the way I expressed the operation in prose and the mathematical notation I used. We write it this way because we know it is an equivalent notation for addition (addition is associative, so the grouping doesn't matter), and in our heads it reduces immediately to:

3 + 7.

and then

10.

So what's the big deal? The problem is that we have no way of understanding the idea of summation from this example. What if instead I had written "Start with 0, then take one element from the list at a time and add it to the starting value as a running sum"? This is what summation is actually about, and it is not about arbitrarily deciding which two things to add first until the expression is reduced.

sum(List) -> sum(List, 0).

sum([], A)    -> A;
sum([H|T], A) -> sum(T, H + A).
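
To make the accumulation visible, here is a hand trace of sum([1,2,3,4]), written as Erlang comments; each line is one application of the clauses above, with the accumulator A carrying the running total:

% sum([1,2,3,4])    -> sum([1,2,3,4], 0)
% sum([1,2,3,4], 0) -> sum([2,3,4], 1 + 0)    A is now 1
% sum([2,3,4], 1)   -> sum([3,4], 2 + 1)      A is now 3
% sum([3,4], 3)     -> sum([4], 3 + 3)        A is now 6
% sum([4], 6)       -> sum([], 4 + 6)         A is now 10
% sum([], 10)       -> 10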

If you're with me so far, then you're ready to understand folds.

There is a problem with the function above; it is too specific. It braids three ideas together without specifying any independently:

  • iteration
  • accumulation
  • addition

It is easy to miss the difference between iteration and accumulation because most of the time we never give this a second thought. Most languages actually encourage us to miss the difference by having the same storage location change its value on each pass through an equivalent loop.

It is easy to miss the independence of the addition itself merely because of the way it is written in this example: "+" looks like an "operator", not a function.

What if I had said "Start with 1, then take one element from the list at a time and multiply it by the running value"? We would still be doing the list processing in exactly the same way, but with two examples to compare it is pretty clear that multiplication and addition are the only difference between the two:

prod(List) -> prod(List, 1).

prod([], A)    -> A;
prod([H|T], A) -> prod(T, H * A).

This is exactly the same flow of execution but for the inner operation and the starting value of the accumulator.

So let's make the addition and multiplication bits into functions, so we can pull that part of the pattern out:

add(A, B)  -> A + B.
mult(A, B) -> A * B.

How could we write the list operation on its own? We need to pass a function in -- addition or multiplication -- and have it operate over the values. We also have to pay attention to the identity element of whatever operation we pass in, or else we will break the aggregation. add(0, X) always returns X, so 0 is the identity element for addition; for multiplication the identity element is 1. So we must start our accumulator at 0 for addition and 1 for multiplication (and at the empty list when building lists, and so on, as sketched below). This means we can't write the function with a starting accumulator value built in, because it would only be correct for some type-and-operation pairs.
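
As a small sketch of the list-building case just mentioned (using only the standard lists:foldl), folding the cons operation over a list with the empty list as the starting accumulator collects the elements, and the result comes out reversed because each element is pushed onto the front:

% [] is the "identity" starting value when the operation builds a list;
% each element lands at the front of the accumulator, hence the reversal.
1> lists:foldl(fun(X, Acc) -> [X|Acc] end, [], [1,2,3,4]).
[4,3,2,1]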

So this means that to write a fold we need a list argument, an argument for the function that does the work, and an accumulator argument, like so:

fold([], _, Accumulator) ->
    Accumulator;
fold([H|T], Operation, Accumulator) ->
    fold(T, Operation, Operation(H, Accumulator)).

With this definition we can now write sum/1 using this more general pattern:

fsum(List) -> fold(List, fun add/2, 0).

And prod/1 also:

fprod(List) -> fold(List, fun mult/2, 1).

And they are functionally identical to the ones we wrote above, but the notation is clearer and we don't have to write the recursive details that tangle the idea of iteration with the idea of accumulation with the idea of some specific operation like multiplication or addition.
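
As a quick sanity check, here is what a shell session might look like, assuming the definitions above are compiled into a module (the module name folds below is made up for this sketch). The same results fall out of the standard lists:foldl, which does the same job but takes its arguments in a different order: the function first, then the initial accumulator, then the list, i.e. lists:foldl(Fun, Acc0, List).

% Assumes fold/3, add/2, mult/2, fsum/1 and fprod/1 above live in a
% hypothetical module named folds that exports them.
1> folds:fsum([1,2,3,4]).
10
2> folds:fprod([1,2,3,4]).
24
3> lists:foldl(fun folds:add/2, 0, [1,2,3,4]).
10
4> lists:foldl(fun folds:mult/2, 1, [1,2,3,4]).
24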

In the case of the RPN calculator the idea of aggregate list operations is combined with the concept of selective dispatch (picking an operation to perform based on what symbol is encountered/matched). The RPN example is relatively simple and small (you can fit all the code in your head at once, it's just a few lines), but until you get used to functional paradigms the process it manifests can make your head hurt.

In functional programming a tiny amount of code can create an arbitrarily complex process of unpredictable (or even evolving!) behavior, based just on list operations and selective dispatch; this is very different from the conditional checks, input validation and procedural checking techniques used in other paradigms more common today. Analyzing such behavior is greatly assisted by single assignment and recursive notation, because each iteration is a conceptually independent slice of time which can be contemplated in isolation of the rest of the system.

I'm talking a little ahead of the basic question, but this is a core idea you may wish to contemplate as you consider why we like to use operations like folds and recursive notations instead of procedural, multiple-assignment loops.
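
To tie this back to the original question, here is a hand trace (again as Erlang comments) of how the stack, which is the fold's accumulator, evolves while lists:foldl applies rpn/2 to each token of the small expression "3 4 + 2 *". The steps follow directly from the clauses in your code:

% Trace of lists:foldl(fun rpn/2, [], string:tokens("3 4 + 2 *", " ")).
% Each token either pushes a number or replaces the top two numbers
% with the result of the matched operation.
%
% Token  Stack before   Clause matched         Stack after
% "3"    []             rpn(X, Stack)          [3]
% "4"    [3]            rpn(X, Stack)          [4,3]
% "+"    [4,3]          rpn("+", [N1,N2|S])    [7]       (3 + 4)
% "2"    [7]            rpn(X, Stack)          [2,7]
% "*"    [2,7]          rpn("*", [N1,N2|S])    [14]      (7 * 2)
%
% foldl returns [14]; the [Res] = ... match in rpn/1 then gives Res = 14.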

I hope this helped more than confused.

