Haskell learning — Grammar


if-then-else
Haskell is an expression-oriented language: every expression must be able to produce a concrete value. Take, for example, the familiar if-else structure:

Prelude> if True then 1

<interactive>:24:15:
    parse error (possibly incorrect indentation or mismatched brackets)

Can't compile? Indeed. If if a then 1 could compile, the expression would be unable to produce a concrete value whenever a is False. This is quite different from imperative languages: an imperative program is a sequence of statements, and caring about the "return value" of an if-else makes little sense there. Haskell, on the other hand, insists that every expression yield a value.

So we have to fill in the else:

Prelude> if True then 1 else "foo"

<interactive>:2:14:
    No instance for (Num [Char])
        arising from the literal `1'
    Possible fix: add an instance declaration for (Num [Char])
    In the expression: 1
    In the expression: if True then 1 else "foo"
    In an equation for `it': it = if True then 1 else "foo"

Still doesn’t compile?

The mistake is obvious. In a pure, statically typed language, an expression whose type differs depending on its input is simply intolerable: both branches of an if must have the same type. So change "foo" to 2, and it passes:

Prelude> if True then 1 else 2
1

Here's a more complex example (don't forget that function definitions are best placed in a .hs file):

myDrop n xs = if n <= 0 || null xs
              then xs
              else myDrop (n - 1) (tail xs)

This function simulates the standard drop. It is not that complicated; it is just plain recursion.
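Assuming the definition has been loaded from a .hs file (hence the *Main prompt), it behaves like drop:

*Main> myDrop 2 [1,2,3,4,5]
[3,4,5]
*Main> myDrop 10 [1,2,3]
[]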

Also note that null is a function that takes a list and returns a Bool indicating whether the list is empty:

Prelude> :info null
null :: [a] -> Bool 	-- Defined in `GHC.List'

This example also shows a little of how Haskell's type inference works. How does Haskell know that xs is a list? How does it know the return type of myDrop? How would it detect a type contradiction? That is not the point of this article, but it is interesting enough to sketch.

First, you can see that null's input must be a [a], and so must tail's, so Haskell automatically determines that xs has type [a].
Then, from the then branch of the if-then-else you can see that one of myDrop's results is xs, so myDrop's return type is at least [a]. "At least", because at a concrete call site the type may be more specific, say [Int] or String.
Finally, assuming the return type of myDrop is [a], the else branch is consistent: myDrop (n - 1) (tail xs) matches myDrop's own input, so it returns [a] as well and imposes no further constraints.
These three steps are a very simple sketch. The real process is a little more involved, because Haskell also has to account for type classes and polymorphism, but the principle is no more complicated than this. Those interested can see Types and Programming Languages; reading RWH and TAPL side by side works especially well!
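You can ask GHCi for the inferred type. Since the literals 0 and 1 and the comparison n <= 0 constrain n, the output should look roughly like this (exact type-variable names vary by GHC version):

*Main> :type myDrop
myDrop :: (Ord t, Num t) => t -> [a] -> [a]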
Also note that it is actually fine to write this function on a single line; it is just hard to read:

myDropX n xs = if n <= 0 || null xs then xs else myDropX (n - 1) (tail xs)

Although Haskell is not as strict about indentation as Python, do not omit it: an indented line continues the existing definition, while a line starting at the left margin begins a new one.
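For example (a sketch; the exact message varies by GHC version), if the then line starts at the left margin, GHC reads it as the beginning of a new top-level definition and parsing fails:

myDropBad n xs = if n <= 0 || null xs
then xs
else myDropBad (n - 1) (tail xs)

Loading this produces something like: parse error on input `then'.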


Lazy evaluation

Many languages provide short-circuit evaluation. In C, for example, in the expression exp1 && exp2, exp2 will not be evaluated once exp1 has evaluated to false. This is a minimal evaluation strategy for the logical operators.

A similar example can easily be found in Haskell:

True || (length [1..] > 0)

Remember the infinite list [1..]? Evaluating [1..] directly in GHCi prints numbers forever until you kill the process. But this expression terminates normally, because the True on the left short-circuits the computation on the right.
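In GHCi the right-hand side is never touched, so the result comes back immediately:

Prelude> True || (length [1..] > 0)
True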

Unlike short-circuit evaluation, lazy evaluation takes this minimization strategy to the extreme.

An expression is not evaluated as soon as it is bound to a variable, but only when its value is actually needed. That is, a statement such as x := expression; does not immediately compute the expression and store the result in x; instead, x simply refers to the unevaluated expression, and only a later use of x that demands its value triggers the evaluation. That evaluation can itself be delayed further, so a rapidly growing tree of dependencies builds up until a result finally has to be produced for the outside world. (from Wikipedia)

Consider the following code:

head [1..]

In an eagerly evaluated language such as Python, this would be a big hassle: the argument passed to head would have to be fully evaluated first, so the call would hang. With the lazy evaluation strategy, it yields the result 1 normally.
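In GHCi this returns immediately:

Prelude> head [1..]
1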

Or consider the scarier expression:

head (tail [1..])

Haskell gives the correct result 2. But don't forget that the result of tail [1..] is itself an infinite list! So the lazy evaluation strategy requires neither the input nor the output to be fully evaluated; reduction simply does not begin until it is forced to.
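Again, GHCi answers immediately:

Prelude> head (tail [1..])
2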

This lazy evaluation strategy makes it possible for users to construct infinite lists. Take the Fibonacci sequence:

fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

The list is not evaluated at definition time; otherwise evaluation would never end, or would end only when memory runs out.
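Only the prefix you actually demand gets computed, as a quick GHCi check shows:

Prelude> let fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Prelude> take 10 fibs
[0,1,1,2,3,5,8,13,21,34]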
What good is such an infinite list?

Imagine a C programmer who one day writes a function that sums the first n Fibonacci terms, the next day writes one that sums the first n even terms, and on the third day writes one that returns the n-th Fibonacci number. Wouldn't he go crazy writing out Fibonacci over and over, even though the function barely changes each time?

Once the infinite list fibs is constructed, he no longer needs to worry about Fibonacci itself; the only things he has to write are the functions that operate on the list. Even if he had to replace Fibonacci with an arithmetic progression the next time, very little would change.
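As a sketch (the helper names here are illustrative, not from the original), all three of those tasks become one-liners over fibs:

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- sum of the first n Fibonacci numbers
sumFibs :: Int -> Integer
sumFibs n = sum (take n fibs)

-- sum of the first n even Fibonacci numbers
sumEvenFibs :: Int -> Integer
sumEvenFibs n = sum (take n (filter even fibs))

-- the n-th Fibonacci number (0-based)
nthFib :: Int -> Integer
nthFib n = fibs !! n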

It's been said that object-oriented languages are the most modular of all, but at this point object orientation looks too young, too simple.

Haskell's lazy evaluation isn't as glamorous as it sounds, though. Some say it's good, some say it's bad. The so-called "lazy accumulation", for example, can be a serious problem.

In its implementation of lazy evaluation, Haskell introduces the notion of a thunk: an object representing a computation that has not yet been evaluated.
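You can watch thunks in GHCi with the :sprint command, which prints _ for anything not yet evaluated (a small demonstration, assuming a monomorphic binding):

Prelude> let x = 1 + 2 :: Int
Prelude> :sprint x
x = _
Prelude> x
3
Prelude> :sprint x
x = 3

This mechanism incurs three costs: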

First, every time an object is used, it must be checked to see whether it has already been evaluated. With large amounts of data this cost is not negligible, and because it is a runtime check, the compiler cannot optimize it away.

Second, there is the so-called "lazy accumulation". It may not sound like much, since all of the computation is necessary eventually, but it makes the program's performance hard to analyze, which can be detrimental to the testing of large projects (the foldl sketch after these three points illustrates it).

Finally, every thunk must be updated in place, via a kind of "side effect", when it is eventually forced. This means all of these thunks have to live in memory rather than in registers. Memory utilization aside, this audacity to ignore the speed differences in a computer's storage hierarchy is Robert Owen's utopian socialism.
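The textbook illustration of lazy accumulation (my example, not the author's) is summing a list with foldl, which silently builds a huge chain of thunks, versus the strict foldl':

import Data.List (foldl')

-- foldl builds the thunk (((0+1)+2)+3)+... and forces nothing until
-- the very end; on a large enough list this can exhaust memory.
leakySum :: Integer
leakySum = foldl (+) 0 [1..10000000]

-- foldl' forces the accumulator at every step, running in constant space.
strictSum :: Integer
strictSum = foldl' (+) 0 [1..10000000]

The two have identical types and results; only their memory behavior differs, which is exactly why performance is hard to analyze.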

Wang Yin’s blog post reads as follows:

Haskell is actually a very problematic language. Its designers try to make a virtue out of a weakness. When you gently criticize its design, its fans always proclaim loudly that it holds profound truths you simply don't understand. Ironically, that is also the attitude they take toward an expert like me, who outclasses its designers. Maybe that's just how people are: the more you respect them, the more tactful you are, the more they assume you don't understand and keep trying to fool you.

In my view, Haskell is probably just a testing ground for type systems, rank-n polymorphism, efficiency, and so on; except that the insistence that a language must be complete gives Haskell too much to waste. We have given pure reason legs but torn off its wings: it may run and jump on the ground more easily, but it can no longer fly in the sky.

In fact, there are more reduction strategies: full beta-reduction, normal order, call-by-name, and call-by-value are the four most commonly discussed, and the last is the most widely used in programming language design. Haskell uses a strategy similar to call-by-name, but because of thunks each particular value is evaluated at most once: what Wadsworth called call-by-need.
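A small sketch (mine, using Debug.Trace) makes call-by-need's sharing visible: the trace message fires only once, even though x is used twice; under plain call-by-name it would fire twice:

import Debug.Trace (trace)

x :: Int
x = trace "evaluating x" (1 + 1)  -- printed only the first time x is forced

main :: IO ()
main = print (x + x)  -- output: "evaluating x", then 4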
