Regularize syntax for lambdas
In light of #4672 it’s time to think of a systematic syntax solution for the various kinds of types for lambdas. So far we have:
- `(x: T) => R` impure dependent function type (term -> term)
- `[X] => R` type lambda (type -> type)
We’d like to reserve `->` for pure functions. This means we are currently lacking a good syntax for polymorphic functions (type -> term). And if we ever want to introduce dependent functions from terms to types, we are also blocked. In light of this, I think it makes sense to revise the syntax of type lambdas to `[X]R`. If we do that, we’d have accounted for:
- `(x: T) => R` term -> term, impure
- `(x: T) -> R` term -> term, pure
- `[X] => R` type -> term, impure
- `[X] -> R` type -> term, pure
- `(x: T)R` term -> type, type-level
- `[X]R` type -> type, type-level
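For reference, here is a runnable sketch of the corresponding forms in the syntax Scala 3 eventually adopted (`=>` for ordinary functions, `[X] => ...` for polymorphic function types, `=>>` for type lambdas); note this differs from the `[X]R` proposal above, and the pure `->` arrow exists only in the experimental capture-checking extension:

```scala
// Scala 3 syntax as eventually adopted.
val inc: Int => Int = x => x + 1               // term -> term
val id: [X] => X => X = [X] => (x: X) => x     // type -> term (polymorphic function value)
type Pair = [X] =>> (X, X)                     // type -> type (type lambda)
val p: Pair[Int] = (1, 2)

assert(inc(1) == 2)
assert(id[String]("a") == "a")
```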
Issue Analytics
- Created: 5 years ago
- Reactions: 13
- Comments: 27 (23 by maintainers)
Top GitHub Comments
@jdegoes
For the following reasons, I am not a fan of the `->` / `=>` distinction for pure / impure functions:

There are infinitely many types of impure functions

Functional programmers are the target audience for pure functions, yet functional programmers care about the distinctions between kinds of impurity. For example, a functional programmer would like to know, merely by looking at a type, that a function uses the effect of `Random`, or the effect of `Clock`, or maybe both. We already have good tools for encoding this via polymorphism and context bounds (`F[_]: Clock: Random`), which are verbose and could use full-featured, first-class type classes, but which nonetheless exactly satisfy the need to precisely describe the scope of a function’s impurity.

The use of `=>` for a generic class of impure functions does not satisfy the needs of functional programmers, because a type `A => B` does not describe the capabilities a function requires. I realize the “implicit effect system” sketched out in an older PR is intended to address this limitation; while this is a separate topic, I feel the implicit effect system will be used by neither functional programmers nor OO programmers. The functional programmers already have better tools, and the Java programmers don’t care.
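The context-bound encoding mentioned above can be sketched with hand-rolled `Clock` and `Random` type classes (the names mirror the comment’s example and are not taken from any particular library, which would provide richer versions):

```scala
// Minimal, hand-rolled effect type classes for illustration only.
trait Clock[F[_]]  { def nanoTime: F[Long] }
trait Random[F[_]] { def nextLong: F[Long] }

// The type says it all: this function may read the clock and draw
// randomness, and nothing else.
def seededStamp[F[_]: Clock: Random]: (F[Long], F[Long]) =
  (summon[Clock[F]].nanoTime, summon[Random[F]].nextLong)

// A trivial "effect" for demonstration: Id wraps plain values.
type Id[A] = A
given Clock[Id] with { def nanoTime: Long = System.nanoTime() }
given Random[Id] with { def nextLong: Long = scala.util.Random.nextLong() }

val (t, r) = seededStamp[Id]
assert(t > 0L)
```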
Impure functions are better modeled by pure functions

Functional programmers are the target audience for pure functions, yet functional programmers prefer to model all functions as pure. Pure functions have tremendous simplifying power for reasoning and testing. If we see a function `a -> b`, we know the function is total, deterministic, and free of side effects. We can reason about it using type-based and equational reasoning; we know quite a lot about its implementation merely by inspecting the type, and we know the semantics of any piece of code through substitution. If we see a function `def foo[F[_]: Clock]: F[Long]`, not only do we know this function uses only the effect of the current time, but we know it always returns the same value, and we can reason about it uniformly in the same way we reason about strings, collections, numbers, and the methods on them. We don’t have to switch our brains over to a mindset in which the ordinary rules of functional programming no longer apply.

Moreover, pure functions give us strictly more compositional power than a mix of pure and impure functions. For example, if we have `l: List[A]`, then we can call `l.map(putStrLn)` to convert that into, say, `List[IO[A]]`, and then call `sequence` to convert that into `IO[List[A]]`. Since all functions are pure, even effectful ones, we can pass effectful functions into higher-order functions.

If we switch this over to the pure/impure proposal, then a function like `List.map` will ideally have the type `List[A] -> (A -> B) -> List[B]`, so we can no longer call `l.map(println)`. Programmers have strictly less expressive power, because they must abide by the constraint that no effectful code can be passed into higher-order functions that expect non-effectful code. In functional programming there is no such restriction, because all functions are pure.
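The `map`-then-`sequence` composition described above can be sketched with a minimal hand-rolled `IO` (deliberately tiny, not any particular library’s):

```scala
// An effect is just a pure value wrapping a thunk.
final case class IO[A](unsafeRun: () => A)

def putStrLn(s: String): IO[Unit] = IO(() => println(s))

// Run a list of effects in order, collecting the results.
def sequence[A](xs: List[IO[A]]): IO[List[A]] =
  IO(() => xs.map(_.unsafeRun()))

val l: List[String] = List("a", "b")
val effects: List[IO[Unit]] = l.map(putStrLn)   // an effectful function passed to map
val all: IO[List[Unit]] = sequence(effects)     // nothing has printed yet
all.unsafeRun()                                 // prints "a" then "b"
```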
Escape hatches will be needed

Since many functions are pure from the outside even though they use an impure implementation, an escape hatch will be necessary: precisely, a way to convert an impure function (`=>`) to, or use it as, a pure function (`->`). The necessary existence of this escape hatch means it is possible to arbitrarily pretend that any impure function is pure. In other words, the benefit of `=>` versus `->` is merely documentation for the programmer, because it provides no actual guarantee from the compiler. This does not mean it is useless, only that its utility is limited to documentation.
If there are impure/pure functions, there should be impure/pure methods

Because methods can be “converted” to functions, and because methods can call functions, consistent treatment of a pure/impure bifurcation for functions would require a similar bifurcation for methods (otherwise there is no information about which kind of function a method can be converted into). This introduces additional complexity and confusion, and leads to the need for further escape hatches.
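The method-to-function conversion in question is ordinary eta-expansion; a minimal example:

```scala
def twice(x: Int): Int = x * 2      // a method
val f: Int => Int = twice           // eta-expansion: the method becomes a function value

// Under a pure/impure split, the compiler would need to know whether `twice`
// may become an Int -> Int (pure) value or only an Int => Int (impure) one.
assert(f(3) == 6)
```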
Many other objections have merit

That `->` is required at the type level but disallowed at the term level means it’s not possible to look at code and know whether it’s pure or impure. `->` also has different meanings in term and type position: tuple construction in one, a pure function type in the other. Contrast this with `=>`, which has a uniform meaning.

Other languages similar to Scala do not take this approach

See, for example, OCaml, 1ML, etc. No impure functional programming language takes this approach, and the pure ones all take different approaches to effects.
In summary, I’d argue the proposal has a large amount of cognitive and pedagogical overhead, introduces a lot of additional complexity into an already complex language, and will not pay for itself given all the limitations mentioned above. It feels like it could facilitate the implicit effect system, but I don’t think that will be used by functional programmers or Java programmers.
I feel a simpler proposal would be better: perhaps allow an `@impure` annotation on methods and functions, and emit warnings (which can be turned into errors) when such functions are called from code not so marked. This way the standard library’s effectful functions can be annotated with `@impure`, and there are no new syntax or backward-compatibility issues.

@jdegoes I did not understand what you were trying to say until I got to your very last paragraph, which made for a frustrating read. So, to spare other people the frustration, what you want is (correct me if I’m wrong) to make “pure” the default. This would mean leaving everything as it is, except for forcing the effectful to declare themselves `@impure`.
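A minimal sketch of such an annotation (hypothetical: no `@impure` exists in Scala’s standard library, and the warning when unmarked code calls an `@impure` member would have to come from the compiler or a linter, not shown here):

```scala
import scala.annotation.StaticAnnotation

// Hypothetical marker annotation; it carries no behavior by itself.
class impure extends StaticAnnotation

@impure
def currentMillis(): Long = System.currentTimeMillis()

// Under the proposal, this call would warn unless the caller is also @impure.
val t: Long = currentMillis()
assert(t > 0L)
```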
You use a confrontational tone while pushing for this, but I am not sure it is even controversial. Leaving syntax considerations aside, I think the goal for everyone here is the same: to require purity by default, and to allow impure code only through explicit mechanisms – be it requiring special imports, adding a capability system, using type classes… you name it! Crucially, these approaches are not mutually exclusive.
Different people working in different domains with different skill sets have different needs. Martin and many others think the Haskell way is not always best, let alone always appropriate, so they are trying to implement a system that makes life better for those who choose not to use it. It seems clear that you are not in this category, so why do you care so much? Again, both camps want purity by default and impurity made explicit, but they don’t have to agree on a single “one true way” of dealing with impure code. AFAIK the approaches could even be mixed, to great benefit.
What are you trying to say here? You want to be able to say that things are pure, and that’s what we want to give you. Where is the problem?
Scala has always had `asInstanceOf` as an escape hatch. Does that mean Scala types are merely documentation for the programmer? No, because the compiler still helps us check them; having an escape hatch does not invalidate that in the least. The same is true for pure functions. I’m not even sure what you think the alternative would be here. Remove `asInstanceOf` from Scala? (Unrelated: I do think `asInstanceOf` should be put behind a feature flag, along with `null`.) As you know, even languages like Haskell and Idris have these escape hatches.

Huh? OCaml’s approach is exactly the status quo currently shared by Scala; I don’t see how that comparison brings any new insight. Also, 1ML is not even a real programming language – it’s a research prototype, and it’s not clear whether what it does for effects makes sense in the context of Scala.
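To close with the escape-hatch analogy from earlier in code: `asInstanceOf` bypasses a single check, while everything else in the program remains fully type-checked:

```scala
// The escape hatch bypasses one check; checking still applies everywhere else.
val x: Any = "hello"
val s: String = x.asInstanceOf[String]   // programmer asserts the type here
// val n: Int = x                        // would not compile: types are still enforced
assert(s.length == 5)
```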