
null - About the non-nullable types debate

I keep hearing people talk about how non-nullable reference types would solve so many bugs and make programming so much easier. Even Tony Hoare, the inventor of the null reference, calls it his "billion-dollar mistake", and Spec# has introduced non-nullable types to combat this problem.

EDIT: Ignore my comment about Spec#. I misunderstood how it works.

EDIT 2: I must be talking to the wrong people; I was really hoping for somebody to argue with :-)


Since I seem to be in the minority, I would guess that I'm wrong, but I can't understand why this debate has any merit. I see null as a bug-finding tool. Consider the following:

class Class { ... };

int main() {
    Class* c = nullptr;  // declared, but never pointed at a real object
    // ... ... ... code ...
    for (int i = 0; i < c->count; ++i) { ... }
}

BAM! Access violation. Someone forgot to initialize c.


Now consider this:

class Class { ... };

int main() {
    Class* c = new Class();  // set to a default-constructed instance instead of null
    // ... ... ... code ...
    for (int i = 0; i < c->count; ++i) { ... }
}

Whoops. The loop gets silently skipped. It could take a while to track down the problem.


If the object is an empty default instance, the code is going to fail anyway. Why not have the system tell you (albeit slightly rudely) instead of making you figure it out yourself?


1 Answer


It's a little odd that the response marked "answer" in this thread actually highlights the problem with null in the first place, namely:

I've also found that most of my NULL pointer errors revolve around forgetting to check the return values of string.h functions, where NULL is used as an indicator.

Wouldn't it be nice if the compiler could catch these kinds of errors at compile time, instead of runtime?
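To make that concrete, here's a minimal sketch of my own (not from the original answer) of the string.h situation in F#: the "not found" result becomes part of the return type instead of a NULL sentinel, so the caller can't forget to check. tryFindIndex is a hypothetical helper defined inline, not a library function.

// Sketch: "not found" encoded in the return type instead of a NULL/-1
// sentinel. tryFindIndex is a hypothetical helper, not a library call.
let tryFindIndex (needle: string) (hay: string) : int option =
    match hay.IndexOf(needle) with
    | -1 -> None        // no match: a distinct case the caller must handle
    | i  -> Some i      // a real index, guaranteed valid

match tryFindIndex "lo" "hello" with
| Some i -> printfn "found at index %d" i
| None   -> printfn "not found"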

If you've used an ML-like language (SML, OCaml, and F# to some extent) or Haskell, reference types are non-nullable. Instead, you represent a "null" value by wrapping it in an option type. In this way, you actually change the return type of a function if it can return null as a legal value. So, let's say I wanted to pull a user out of the database:

let findUser username =
    // executeQuery and initUser are assumed data-access helpers
    let recordset = executeQuery("select * from users where username = @username")
    if recordset.getCount() > 0 then
        let user = initUser(recordset)
        Some(user)   // found: the user is wrapped in Some
    else
        None         // not found: an explicit, typed "no user"

findUser has the type val findUser : string -> user option, so the return type of the function itself tells you that it might not return a user. To consume the result, you need to handle both the Some and None cases:

match findUser "Juliet Thunderwitch" with
| Some x -> printfn "Juliet exists in database"
| None -> printfn "Juliet not in database"

If you don't handle both cases, the compiler flags the non-exhaustive match. So the type system guarantees that you'll never get a null-reference exception, and it guarantees that you always handle the "no value" case. And if a function returns user, it's guaranteed to be an actual instance of an object. Awesomeness.
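To illustrate that last point, here's a tiny self-contained sketch (my illustration, with a made-up User type): once a function's type promises a plain user rather than a user option, the caller needs no check at all.

// Sketch (hypothetical User type): a return type without option means the
// caller always gets a real value, so there is no null case to forget.
type User = { Name: string }

let defaultUser () : User = { Name = "guest" }   // always yields a real User

let u = defaultUser ()
printfn "hello, %s" u.Name   // no null check needed, or even possible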

Now we see the problem in the OP's sample code:

class Class { ... };

int main() {
    Class* c = new Class();  // set to a default-constructed instance instead of null
    // ... ... ... code ...
    for (int i = 0; i < c->count; ++i) { ... }
}

Initialized and uninitialized objects have the same data type, so you can't tell the difference between them. Occasionally the null object pattern can be useful, but the code above demonstrates that the compiler has no way to determine whether you're using your types correctly.
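For comparison, here's how the OP's "maybe uninitialized" variable might look with an option type (a sketch with a made-up Thing type): the uninitialized state gets its own representation, and the compiler forces a decision at the use site, which is exactly the loud failure the OP asks for.

// Sketch (hypothetical Thing type): "not initialized yet" is its own state,
// distinguishable from a real instance at compile time.
type Thing = { count: int }

let c : Thing option = None   // declared, but explicitly not initialized

match c with
| Some t -> for i in 0 .. t.count - 1 do printfn "%d" i
| None   -> failwith "c was never initialized"   // loud, not silent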

