Whatever your apps are doing, if you want them to pass event messages saying that some task is completed, you first have to ask yourself what type of messaging you need:
- Delivery without any guarantees (meaning the consumer app doesn't care if it misses a message, for whatever reason)
- Delivery with guarantees
  - At least once delivery (the consumer app is guaranteed to get every message, but sometimes it may get the same one twice)
  - Exactly once delivery (the consumer app is guaranteed to get each message exactly once!)
So, if the first case is enough for you, then pick one of:
- Erlang's built-in pg2
- A cluster-global OTP process that acts as a pub/sub mediator (it accepts subscribers and forwards messages to them as they are emitted)
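The mediator idea can be sketched language-agnostically. Here is a minimal, in-memory pub/sub mediator in Python (the class and names are hypothetical; in Erlang you would register a global process and send it messages instead). Note how it embodies "no guarantees": if a subscriber fails, the event is simply gone.

```python
# Minimal in-memory pub/sub mediator: subscribers register a callback,
# and every emitted event is forwarded to all of them. If a subscriber
# fails, the event is simply lost -- no delivery guarantees.
class PubSubMediator:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def emit(self, event):
        for callback in self.subscribers:
            try:
                callback(event)
            except Exception:
                pass  # fire-and-forget: a failing consumer loses the event

received = []
mediator = PubSubMediator()
mediator.subscribe(received.append)
mediator.emit({"task_id": 1, "status": "completed"})
```

If the mediator process itself crashes, all subscriptions are lost too, which is exactly the failure mode discussed below.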
If you need delivery guarantees, then you need some "backend" that can persist messages until each known consumer acknowledges that it has read/processed them. One way is to write your own backend for this, e.g. using mnesia or some SQL/NoSQL database, and then query that backend at some interval to see whether any new event has been written. The other way, if you need a more robust solution, is to use an already battle-proven backend such as RabbitMQ or Kafka.
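To make "persist until each known consumer acknowledges" concrete, here is a small sketch in Python; the dicts stand in for mnesia or SQL tables, and all names are illustrative, not any real library's API:

```python
# Sketch of a persistent backend: a message stays stored until every
# known consumer has acknowledged it. Dicts stand in for mnesia/SQL tables.
class AckBackend:
    def __init__(self, consumers):
        self.messages = {}                            # msg_id -> payload
        self.pending = {c: set() for c in consumers}  # unacked ids per consumer
        self.next_id = 0

    def publish(self, payload):
        self.next_id += 1
        self.messages[self.next_id] = payload
        for pending in self.pending.values():
            pending.add(self.next_id)
        return self.next_id

    def fetch(self, consumer):
        """Called on an interval by each consumer to read its unacked messages."""
        return [(i, self.messages[i]) for i in sorted(self.pending[consumer])]

    def ack(self, consumer, msg_id):
        self.pending[consumer].discard(msg_id)
        if not any(msg_id in p for p in self.pending.values()):
            del self.messages[msg_id]  # everyone acked: safe to purge

backend = AckBackend(["billing", "mailer"])
mid = backend.publish({"task": "import", "done": True})
backend.ack("billing", mid)  # "mailer" still has it pending
```

Because a fetched-but-unacked message is delivered again on the next interval, this gives you at-least-once semantics; consumers must tolerate duplicates.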
Please note that distributed transactions are hard to implement when it comes to messaging. Your task writes some data to the database, and there may be cases where the messaging backend is down at the moment the database write completes, so the task processor may not be able to send the message to the other apps (this also applies to pg2 and a global Erlang process; no one guarantees they are up all the time), so you need to think about how to recover from such failures. A small percentage of messages may be lost on such infrastructure failures.
One question: would it be OK if your consumer apps pulled from the database from time to time for new entries? Say each task has a unique, incremental numeric id; then you can use it as a read "position" in the consumer apps, and just write down the last position read. Each app should maintain its own position between read intervals.
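The position-based polling above can be sketched in a few lines of Python (the list stands in for your tasks table, and the function names are made up for illustration):

```python
# Polling by position: tasks have incremental numeric ids, and each
# consumer remembers the last id it has processed. On every poll it
# asks only for rows with id > position, then advances its position.
tasks = []  # stand-in for the tasks table: list of (id, payload)

def write_task(payload):
    tasks.append((len(tasks) + 1, payload))

def poll(position):
    """Return entries newer than `position` and the updated position."""
    new = [(i, p) for i, p in tasks if i > position]
    return new, (new[-1][0] if new else position)

write_task("resize image")
write_task("send email")
entries, pos = poll(0)    # first poll sees both tasks, pos becomes 2
entries, pos = poll(pos)  # nothing new; position stays at 2
```

A nice property of this scheme is crash safety: if a consumer dies before persisting its new position, it simply re-reads from the old position, which again means at-least-once delivery.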
--- EDIT ---
One more thing that may be helpful while thinking about how to solve this. If you haven't had a chance to read about CQRS and "Event Sourcing", please briefly google them and see what the concept is. In short: the source of truth is the event stream (some frameworks or people call it the event journal), while all other data stores are just projections of that event stream up to a specific moment in time. What is interesting about this architectural approach is that you don't need distributed transactions. While the event stream's purpose is to store changes (inserts and updates) over an entity's lifetime, some backends built for this purpose can also help with pushing events to subscribers. I already mentioned Kafka, which is often used, but there is also the dead-simple EventStore.
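The "projections from the event stream" idea fits in a few lines; here is a toy illustration in Python (event shapes invented for the example), showing current state as nothing more than a fold over the append-only stream:

```python
# Event sourcing in miniature: the append-only event stream is the source
# of truth, and current state is just a projection (a fold over events).
events = [
    {"type": "created", "id": 1, "name": "task A"},
    {"type": "completed", "id": 1},
]

def project(events):
    state = {}
    for e in events:
        if e["type"] == "created":
            state[e["id"]] = {"name": e["name"], "done": False}
        elif e["type"] == "completed":
            state[e["id"]]["done"] = True
    return state

current = project(events)  # {1: {"name": "task A", "done": True}}
```

Since any projection can be rebuilt from the stream at any time, a consumer that missed events just replays from its last position instead of needing a distributed transaction.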
I'm not saying that you should build a CQRS app; just try to find out how the pipeline is organized in such an architecture. I'm sure it will help you build a reliable pipeline for your case.
He who fights dragons for too long becomes a dragon himself; gaze too long into the abyss, and the abyss gazes back into you…