Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others

swift - Unexpectedly large Realm file size

This question is about two different ways of inserting objects into a Realm. I noticed that the first method is a lot faster, but the resulting file is huge compared with the second method. The only difference between the two approaches is whether the write transaction sits outside or inside the for loop.

// Create realm file
let realm = try! Realm(fileURL: banco_url!)

When I add objects like this, the Realm file grows to 75.5MB:

try! realm.write {
    for i in 1...40000 {
        let new_realm_obj = realm_obj(value: ["id": incrementID(),
                                              "a": "123",
                                              "b": 12.12,
                                              "c": 66,
                                              "d": 13.13,
                                              "e": 0.6,
                                              "f": "01100110",
                                              "g": Date(),
                                              "h": 3])

        realm.add(new_realm_obj)
        print("\(i) Added")
    }
}

When I add objects like this, the Realm file only grows to 5.5MB:

for i in 1...40000 {
    let new_realm_obj = realm_obj(value: ["id": incrementID(),
                                          "a": "123",
                                          "b": 12.12,
                                          "c": 66,
                                          "d": 13.13,
                                          "e": 0.6,
                                          "f": "01100110",
                                          "g": Date(),
                                          "h": 3])
    try! realm.write {
        realm.add(new_realm_obj)
        print("\(i) Added")
    }
}

My class that I add to the Realm file:

class realm_obj: Object {
    dynamic var id = Int()
    dynamic var a = ""
    dynamic var b = 0.0
    dynamic var c = Int8()
    dynamic var d = 0.0
    dynamic var e = 0.0
    dynamic var f = ""
    dynamic var g = Date()
    dynamic var h = Int8()
}

Auto-increment function:

func incrementID() -> Int {
    let realm = try! Realm(fileURL: banco_url!)
    return (realm.objects(realm_obj.self).max(ofProperty: "id") as Int? ?? 0) + 1
}

Is there a better or correct way to do this? Why do I get such different file sizes in these cases?



1 Answer


The large file size when adding all of the objects in a single transaction is due to an unfortunate interaction between Realm's transaction log subsystem and Realm's memory allocation algorithm for large blobs. Realm's memory layout algorithm requires that the file size be at least 8x the size of the largest single blob stored in the Realm file. Transaction log entries, summarizing the modifications made during a single transaction, are stored as blobs within the Realm file.

When you add 40,000 objects in one transaction, you end up with a single transaction log entry that's around 5MB in size. This means that the file has to be at least 40MB in size in order to store it. (I'm not quite sure how it ends up being nearly twice that size again. It might be that the blob size is rounded up to a power of two somewhere along the line…)

When you add one object in each of 40,000 transactions, the file still contains only a single transaction log entry at a time, and this one is only around a hundred bytes in size. This happens because when Realm commits a transaction, it first attempts to reclaim unused transaction log entries before allocating space for new ones. Since the Realm file is not open anywhere else, the previous entry can be reclaimed as each new commit is performed.

realm/realm-core#2343 tracks improving how Realm stores transaction log entries to avoid the significant overallocation you're seeing.

For now my suggestion would be to split the difference between the two approaches and add groups of objects per write transaction. This will trade off a little performance by increasing the number of commits but will reduce the impact of the memory layout algorithm by reducing the size of the largest transaction log entry you create. From a quick test, committing every 2,000 objects results in a file size of around 4MB, while being significantly quicker than adding each object in a separate write transaction.
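The batching suggestion above can be sketched roughly like this (a non-authoritative sketch assuming the asker's `realm_obj` class and an already-opened `realm`; the chunk size of 2,000 is just the value from the quick test above):

```swift
import Foundation
import RealmSwift

// Batched insert: one write transaction per chunk of 2,000 objects.
// Each transaction log entry stays small, which limits the 8x file-size
// blow-up described above, while avoiding 40,000 separate commits.
func insertInChunks(realm: Realm, total: Int = 40000, chunkSize: Int = 2000) throws {
    // Compute the starting id once, instead of re-querying per object.
    var nextID = (realm.objects(realm_obj.self).max(ofProperty: "id") as Int? ?? 0) + 1

    for chunkStart in stride(from: 0, to: total, by: chunkSize) {
        try realm.write {
            for _ in chunkStart..<min(chunkStart + chunkSize, total) {
                let obj = realm_obj(value: ["id": nextID,
                                            "a": "123",
                                            "b": 12.12,
                                            "c": 66,
                                            "d": 13.13,
                                            "e": 0.6,
                                            "f": "01100110",
                                            "g": Date(),
                                            "h": 3])
                realm.add(obj)
                nextID += 1   // track the id locally inside the batch
            }
        }
    }
}
```

As a side benefit, tracking the id locally avoids reopening the Realm and re-running the `max(ofProperty:)` query for every single object, which is what the original `incrementID()` does.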

