
.net - Tracking changes in complex object graph

I have started to think about tracking changes in a complex object graph in a disconnected application. I have already found several solutions, but I would like to know whether there is a best practice, or which solution you use and why. I posted the same question on the MSDN forum but received only a single answer; I would like more answers so I can learn from the experience of other developers.

This question is related to .NET, so for answers with implementation details I prefer ones related to the .NET world, but I think the problem is the same on other platforms.

The theoretical problem in my case is defined in a multi-layered architecture (not necessarily n-tier at the moment) as follows:

  • A layer of repositories using an ORM to deal with persistence (the ORM tool doesn't matter at the moment, but it will most probably be Entity Framework 4.0 or NHibernate).
  • A set of pure classes (persistence ignorant = POCO, the equivalent of POJO in the Java world) representing domain objects. Repositories persist these classes and return them as query results.
  • A set of domain services working with domain entities.
  • A facade layer defining the gateway to business logic. Internally it uses repositories, domain services and domain objects. Domain objects are not exposed - each facade method uses a set of specialized data transfer objects for its parameters and return values. It is the responsibility of each facade method to transform a domain entity to a DTO and vice versa (a small sketch of these contracts follows this list).
  • A modern web application which uses the facade layer and DTOs - this is what I call a disconnected application. The design may change in the future so that the facade layer is wrapped by a web service layer and the web application consumes those services => a transition to 3-tier (web, business logic, database).
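
As a rough illustration of these layers, here is a minimal C# sketch; all the names (Order, IOrderRepository, OrderDto, OrderFacade) are hypothetical and only show how the facade keeps domain objects behind DTOs:

    // Hypothetical persistence-ignorant domain object handled by the repository layer.
    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
    }

    // Repository contract implemented on top of the ORM (EF 4.0 / NHibernate).
    public interface IOrderRepository
    {
        Order GetById(int id);
        void Save(Order order);
    }

    // DTO used at the facade boundary; the domain object itself never leaves the facade.
    public class OrderDto
    {
        public int Id { get; set; }
        public string Customer { get; set; }
    }

    // Facade method: queries the repository and maps the domain entity to a DTO.
    public class OrderFacade
    {
        private readonly IOrderRepository _repository;

        public OrderFacade(IOrderRepository repository)
        {
            _repository = repository;
        }

        public OrderDto GetOrder(int id)
        {
            Order order = _repository.GetById(id);
            return new OrderDto { Id = order.Id, Customer = order.Customer };
        }
    }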

Now suppose that one of the domain objects is Order, which has Order details (lines) and related Orders. When the client requests an Order for editing, it can modify the Order, add, remove or modify any Order detail, and add or remove related Orders. All these modifications are done on data in the web browser - JavaScript and AJAX - so all changes are submitted in a single shot when the client pushes the save button. The question is how to handle these changes. The repository and ORM tool need to know which entities and relationships were modified, inserted or deleted.
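
To make the graph concrete, the Order aggregate might look roughly like this (class and property names are illustrative only):

    using System.Collections.Generic;

    // Hypothetical shape of the Order graph described above.
    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
        public IList<OrderDetail> Details { get; set; }    // lines: can be added, removed or modified
        public IList<Order> RelatedOrders { get; set; }    // related orders: can be added or removed
    }

    public class OrderDetail
    {
        public int Id { get; set; }
        public string Product { get; set; }
        public int Quantity { get; set; }
    }

I ended up with two "best" solutions: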

  1. Store the initial state of the DTO in a hidden field (or, in the worst case, in the session). When receiving the request to save changes, create a new DTO based on the received data and a second DTO based on the persisted data. Merge the two and track the changes. Send the merged DTO to the facade layer and use the collected change information to properly set up the entity graph. This requires some manual change tracking in the domain object so that the change information can be built from scratch and later passed to the repository - this is the point I am not very happy with (a sketch of this approach follows the list).

  2. Do not track changes in the DTO at all. When receiving the modified data in the facade layer, create the modified entity and load the actual state from the repository (generally an additional query to the database - this is the point I am not very happy with) - merge the two entities and let the entity proxy provided by the ORM tool track the changes automatically (Entity Framework 4.0 and NHibernate allow this). Special care is needed for concurrency handling, because the actual state does not have to be the initial state (this approach is also sketched below).
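
A rough sketch of option 1, assuming the Order graph above: the DTOs carry an explicit change state that the facade translates into repository operations (all type names are hypothetical):

    using System.Collections.Generic;

    // Change state recorded per DTO by diffing the received data against the stored initial state.
    public enum ChangeState { Unchanged, Added, Modified, Deleted }

    public class OrderDetailDto
    {
        public int Id { get; set; }
        public string Product { get; set; }
        public int Quantity { get; set; }
        public ChangeState State { get; set; }
    }

    public class OrderEditDto
    {
        public int Id { get; set; }
        public string Customer { get; set; }
        public ChangeState State { get; set; }
        public IList<OrderDetailDto> Details { get; set; }
    }

And a rough sketch of option 2 with Entity Framework 4.0: load the current graph, copy the received values onto it and let the ORM's change tracking decide what to persist. The repository indirection is omitted for brevity, OrdersContext is a hypothetical context exposing an Orders set, and concurrency handling is left out:

    // Requires System.Linq for the Single(...) query.
    public class OrderFacade
    {
        public void SaveOrder(OrderEditDto dto)
        {
            using (var context = new OrdersContext())
            {
                // The additional database query the question is unhappy about.
                Order current = context.Orders
                                       .Include("Details")
                                       .Single(o => o.Id == dto.Id);

                // Copy the received values onto the tracked entity; the proxy records what changed.
                current.Customer = dto.Customer;
                // ... merge added / removed / modified details here in the same way ...

                context.SaveChanges();   // the ORM persists only the detected changes
            }
        }
    }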

What do you think about that? What do you recommend?

I know that some of these challenges can be avoided by using caching on some application layers but that is something I don't want to use at the moment.

My interest in this topic goes even further. For example, suppose that the application moves to a 3-tier architecture and the client (web application) will not be written in .NET, so the DTO classes can't be reused. Tracking changes on the DTO will then be much harder, because it will require the other development team to properly implement the tracking mechanism with their own development tools.

I believe these problems have to be solved in plenty of applications, so please share your experience.



1 Answer


It's all about responsibility.

(I'm not sure if this is the sort of answer you're after - let me know if it's not so I can update it).

So we have multiple layers in a system - each is responsible for a different task: data access, UI, business logic, etc. When we architect a system in this way we are (amongst other things) trying to make future change easy by making each component responsible for one task - so it can focus on that one task and do it well. It also makes it easier to modify the system as time passes and change is needed.

Similar thoughts need to be kept in mind when considering the DTO - "how do I track changes?", for example. Here's how I approach it: the BL is responsible for managing rules and logic; given the stateless nature of the web (which is where I do most of my work), I'm just not tracking the state of an object and looking explicitly for changes. If a user is passing data back (to be saved / updated), I'll pass the whole lot back without caring what's been changed.

On one hand this might seem inefficient, but as the amounts of data aren't vast it's just not an issue; on the flip side, there are fewer "moving parts", so less can go wrong as the process is much simpler.
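
For example, a minimal sketch of this "send the whole lot back" approach, assuming hypothetical names (OrderUpdateDto, IOrderData, OrderLogic): the BL applies its rules and then hands the full DTO to the DAL without any change tracking:

    // Full state of the order as edited in the UI - no per-field change flags.
    public struct OrderUpdateDto
    {
        public int Id;
        public string Customer;
    }

    // DAL contract, injected into the BL via DI.
    public interface IOrderData
    {
        void Update(OrderUpdateDto order);
    }

    // BL: validates business rules, then overwrites the stored record wholesale.
    public class OrderLogic
    {
        private readonly IOrderData _data;

        public OrderLogic(IOrderData data)
        {
            _data = data;
        }

        public void Update(OrderUpdateDto order)
        {
            // business rule checks would go here
            _data.Update(order);
        }
    }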

How do I pass the data back?

  • I use DTOs (or perhaps POCOs would be more accurate); when I exchange data between the BL and DAL (via interfaces / DI), the data is exchanged as a DTO (or a collection of them). Specifically, I'm using a struct for a single instance and a collection of these structs for multiple instances.

  • The DTOs are defined in a common class library that has very few dependencies.

  • I deliberately try to limit the number of DTOs I create for a specific object (like "Order") - but at the same time I'll make new ones if there is a good reason. Typically I'll have a "fat" DTO which contains most / all of the data available for that object; I'll also probably have a much leaner one that's designed to be used in collections (for lists, etc.). In both cases these DTOs are purely for returning info for "reading". You have to keep the responsibilities in mind - when the BL asks for data it's usually not trying to write data back at the same time, so the fact that the DTO is "read only" is more about conforming to a clean interface and architecture than a business rule.

  • I always define separate DTOs for inserting and updating - even if they share exactly the same fields. This way the worst that can happen is duplication of some trivial code - as opposed to having dependencies and multiple re-use cases to untangle. (These DTO shapes are sketched after this list.)
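
Putting those points together, the DTO shapes might look something like this (illustrative names and fields only, not taken from a real project):

    using System;

    // "Fat" read DTO: most / all of the data for a single Order.
    public struct OrderDto
    {
        public int Id;
        public string Customer;
        public decimal Total;
        public DateTime Placed;
    }

    // Lean read DTO: just enough for lists / grids, returned as a collection.
    public struct OrderListItemDto
    {
        public int Id;
        public string Customer;
    }

    // Separate DTOs for inserting and updating, even though the fields overlap,
    // so the two cases can evolve independently.
    public struct OrderInsertDto
    {
        public string Customer;
    }

    public struct OrderUpdateDto
    {
        public int Id;
        public string Customer;
    }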

Finally - don't confuse how the DAL works with how the UI does; let ORMs do their thing: just because they store the data in a given way doesn't mean it's the only way.

The most important thing is to specify meaningful interfaces between your layers.

Managing what's changed is the job of the BL; let the UI work in a way that's best for your users and let the BL figure out how it wants to deal with that, and the DAL (via your nice clean interface with DI) just does what it's told.


