If something can be computed in the lambda-calculus, it can be computed. But the lambda-calculus is not directly implementable, because it is non-deterministic and computations may not terminate. It is non-deterministic because it doesn’t fix the order in which reductions are applied, and the same term can have both an infinite derivation and a terminating one, depending on which redex is contracted first. That’s why we should study other possible calculi to implement.

It would be nice to have termination guarantees. When a calculus has this property we say that it is strongly normalizing (all derivations terminate). It may instead be weakly normalizing, meaning that every term has at least one terminating derivation. The name comes from the fact that a derivation (a computation, a sequence of reductions) terminates when it reaches a normal form. So there is also the need to define what a normal form of the calculus is. It is important as well to guarantee that, if a term has two or more derivations, they all end up in the same normal form (somewhere during the derivation the diverging terms come back together again). That is called the confluence property. So a calculus should also be confluent, because we don’t like inconsistencies.

One way of guaranteeing normalization could be to fix the order in which the reduction rules are applied — a reduction strategy. In the lambda-calculus, for instance, normal-order (leftmost-outermost) reduction reaches a normal form whenever one exists, while applicative-order reduction may diverge on the same term.
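To make the non-determinism concrete, here is a minimal sketch of an untyped lambda-calculus reducer with both strategies. The tuple encoding of terms and all function names are my own choices, and the substitution is naive (it assumes bound names never clash with free names, which holds for the example below). The classic term is `K z Ω`: normal order throws `Ω` away and terminates; applicative order insists on reducing `Ω` first and loops forever.

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', function, argument)

def subst(t, x, s):
    """Replace free occurrences of variable x in t by s.
    (Naive: assumes no variable capture, fine for the example below.)"""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def step_no(t):
    """Normal order (leftmost-outermost): contract the outermost redex first.
    Returns the reduced term, or None if t is in normal form."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                 # outermost redex: contract it now
            return subst(f[2], f[1], a)
        r = step_no(f)
        if r is not None:
            return ('app', r, a)
        r = step_no(a)
        if r is not None:
            return ('app', f, r)
    if t[0] == 'lam':
        r = step_no(t[2])
        if r is not None:
            return ('lam', t[1], r)
    return None

def step_ao(t):
    """Applicative order (leftmost-innermost): fully reduce subterms first,
    contract a redex only when its argument is already in normal form."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        r = step_ao(f)
        if r is not None:
            return ('app', r, a)
        r = step_ao(a)
        if r is not None:
            return ('app', f, r)
        if f[0] == 'lam':
            return subst(f[2], f[1], a)
    if t[0] == 'lam':
        r = step_ao(t[2])
        if r is not None:
            return ('lam', t[1], r)
    return None

def run(t, step, limit=50):
    """Reduce until normal form, or give up after `limit` steps."""
    for i in range(limit):
        nxt = step(t)
        if nxt is None:
            return t, i          # normal form reached in i steps
        t = nxt
    return None, limit           # still reducing: probably diverging

# Omega = (λx. x x)(λx. x x): reduces to itself forever.
omega = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))),
                ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))))
# K = λx. λy. x: discards its second argument.
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
# K z Omega: one strategy terminates, the other doesn't.
term = ('app', ('app', K, ('var', 'z')), omega)

print(run(term, step_no))   # (('var', 'z'), 2): normal form in 2 steps
print(run(term, step_ao))   # (None, 50): hit the step limit, diverging
```

The same term, two derivations: one reaches the normal form `z`, the other never terminates — exactly the non-determinism described above.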

Rewriting systems are systems formed by rules that tell you how terms can be rewritten. For example:

s(0) -> 1

s(x + y) -> x + s(y)
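The two rules above can be run directly as a tiny rewriting engine. A sketch in Python, where the tuple encoding of terms and the outermost-leftmost application order are my own choices:

```python
# Terms: plain ints, ('s', t), or ('+', a, b)

def rewrite_once(t):
    """Apply the first rule that matches, searching outermost-leftmost.
    Returns the rewritten term, or None if no rule applies anywhere."""
    if isinstance(t, tuple):
        if t[0] == 's':
            if t[1] == 0:                          # rule: s(0) -> 1
                return 1
            if isinstance(t[1], tuple) and t[1][0] == '+':
                _, x, y = t[1]                     # rule: s(x + y) -> x + s(y)
                return ('+', x, ('s', y))
        for i in range(1, len(t)):                 # no rule at the root:
            r = rewrite_once(t[i])                 # try subterms, left to right
            if r is not None:
                return t[:i] + (r,) + t[i + 1:]
    return None

def normal_form(t):
    """Rewrite until no rule applies: the result is a normal form."""
    while (nxt := rewrite_once(t)) is not None:
        t = nxt
    return t

# s(3 + 0)  ->  3 + s(0)  ->  3 + 1, and no rule applies anymore
print(normal_form(('s', ('+', 3, 0))))   # ('+', 3, 1)
```

Note that `('+', 3, 1)` really is a normal form here: the system has no rule for evaluating `+` itself, so rewriting stops as soon as neither rule matches any subterm.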

So we could say that the lambda-calculus with its beta-reduction is a rewriting system. Actually, I might say that all calculi are rewriting systems. These systems can also define computations, depending on the way terms and rules are defined. And they come with the same worries about termination and confluence.

(Maybe not that organized…)

*Related*