
java - High Performance JMS Messaging

I read slides from this year's UberConf, and one of the speakers argues that Spring JMS adds a performance overhead to your message queue system, but I don't see any evidence to support that in the slides. The speaker also makes the case that point-to-point is faster than the traditional "publish-subscribe" method because each message is sent only once instead of being broadcast to every consumer.

I'm wondering if any experienced Java messaging gurus can weigh in here and clarify a few technicalities:

  • Is there actually a performance overhead incurred by using Spring JMS instead of just pure JMS? If so, how and where is it introduced? Is there any way around it?
  • What actual evidence is there to support that P2P is faster than the pub-sub model, and if so, are there ever any cases when you would want to pub-sub over P2P (i.e. why go slower?!?)?


1 Answer


1) Primarily, the overhead of Spring JMS comes from using JmsTemplate to send messages without a caching mechanism underneath. Essentially, JmsTemplate will do the following for each message you send (see the sketch after the list):

  • Create Connection
  • Create Session
  • Create Producer
  • Create Message
  • Send Message
  • Close Session
  • Close connection
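As a rough illustration (not from the slides; the broker URL and queue name are just placeholders), a plain JmsTemplate wrapped directly around the vendor factory pays that full cycle on every iteration of the loop:

    import javax.jms.ConnectionFactory;

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.jms.core.JmsTemplate;

    public class NaiveSender {
        public static void main(String[] args) {
            // No caching/pooling in front of the vendor factory
            ConnectionFactory connectionFactory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            JmsTemplate template = new JmsTemplate(connectionFactory);

            for (int i = 0; i < 1000; i++) {
                // Every call runs the whole create/send/close cycle listed above
                template.convertAndSend("test.queue", "message-" + i);
            }
        }
    }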

Compare this to manually written code where you reuse these resources (see the sketch after this list):

  • Create Connection
  • Create Session
  • Create Producer
  • Create Message
  • Send Message
  • Create Message
  • Send Message
  • Create Message
  • Send Message
  • Close Session
  • Close connection
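A hand-written equivalent using the plain JMS 1.1 API, keeping the connection, session and producer open across sends, could look roughly like this (connection factory and queue name are placeholders again):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ReusingSender {
        public static void main(String[] args) throws Exception {
            ConnectionFactory connectionFactory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = connectionFactory.createConnection();   // created once
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("test.queue");
                MessageProducer producer = session.createProducer(queue);   // reused for every send

                for (int i = 0; i < 1000; i++) {
                    TextMessage message = session.createTextMessage("message-" + i);
                    producer.send(message);   // only this happens per message
                }
            } finally {
                connection.close();   // also closes the session and producer
            }
        }
    }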

Since creating connections, sessions and producers requires round trips between your client and the JMS provider, as well as resource allocation, it adds a sizable overhead when you send lots of small messages.

You can easily get around this by caching JMS resources. For instance, use Spring's CachingConnectionFactory or ActiveMQ's PooledConnectionFactory (if you are using ActiveMQ, which you tagged this question with).
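A minimal sketch of the caching approach (broker URL and cache size are arbitrary): wrap the vendor factory in CachingConnectionFactory, and the same JmsTemplate code from above reuses sessions and producers instead of recreating them:

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.jms.connection.CachingConnectionFactory;
    import org.springframework.jms.core.JmsTemplate;

    public class CachedSender {
        public static void main(String[] args) {
            CachingConnectionFactory cachingConnectionFactory = new CachingConnectionFactory(
                    new ActiveMQConnectionFactory("tcp://localhost:61616"));
            cachingConnectionFactory.setSessionCacheSize(10);   // number of cached sessions
            cachingConnectionFactory.setCacheProducers(true);   // true by default, shown for clarity

            JmsTemplate template = new JmsTemplate(cachingConnectionFactory);
            for (int i = 0; i < 1000; i++) {
                template.convertAndSend("test.queue", "message-" + i);   // reuses cached resources
            }
        }
    }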

If you are running inside a full Java EE container, pooling/caching is often built in and implicit when you retrieve your connection factory from JNDI.

When receiving with Spring's DefaultMessageListenerContainer, there is a thin layer in Spring that might add a little overhead, but the primary aspect is that you can tweak performance in terms of concurrency, etc. This article explains it very well.
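A rough sketch of that concurrency tuning with DefaultMessageListenerContainer (queue name, thread counts and the trivial listener are placeholders):

    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    public class ReceiverSetup {
        public static void main(String[] args) {
            ConnectionFactory connectionFactory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");

            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(connectionFactory);
            container.setDestinationName("test.queue");
            container.setConcurrentConsumers(5);        // start with 5 consumer threads
            container.setMaxConcurrentConsumers(20);    // scale up to 20 under load
            container.setMessageListener(new MessageListener() {
                @Override
                public void onMessage(Message message) {
                    System.out.println("received " + message);
                }
            });

            container.afterPropertiesSet();   // initialization normally done by the Spring context
            container.start();
        }
    }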

2)

Pub-sub is a usage pattern where the publisher does not need to know which subscribers exist; you can't simply emulate that with P2P. And, without any proof at hand, I would argue that if you want to send an identical message from one application to ten other applications, a pub-sub setup would be faster than sending the message ten times over P2P.

On the other hand, if you only have one producer and one consumer, choose the P2P pattern with queues instead, since it's easier to manage in some aspects. P2P (queues) allows load balancing, which pub/sub does not (as easily).
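For reference, the same JmsTemplate can target either model; whether a destination name is treated as a queue or a topic is controlled by the pub/sub domain flag (destination names below are placeholders):

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.jms.connection.CachingConnectionFactory;
    import org.springframework.jms.core.JmsTemplate;

    public class DomainExample {
        public static void main(String[] args) {
            CachingConnectionFactory connectionFactory = new CachingConnectionFactory(
                    new ActiveMQConnectionFactory("tcp://localhost:61616"));

            // P2P: one copy of the message, consumed by exactly one receiver on the queue
            JmsTemplate queueTemplate = new JmsTemplate(connectionFactory);
            queueTemplate.convertAndSend("orders.queue", "order-1");

            // Pub-sub: every active subscriber of the topic gets its own copy
            JmsTemplate topicTemplate = new JmsTemplate(connectionFactory);
            topicTemplate.setPubSubDomain(true);
            topicTemplate.convertAndSend("orders.topic", "order-1");
        }
    }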

ActiveMQ also has a hybrid option, virtual destinations, which are essentially topics with load balancing.
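With ActiveMQ's default virtual topic naming convention (application and destination names below are assumed for illustration), the producer publishes once to a topic while each consuming application reads from its own queue, so multiple instances of one application can load-balance:

    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    import org.apache.activemq.ActiveMQConnectionFactory;
    import org.springframework.jms.core.JmsTemplate;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    public class VirtualTopicExample {
        public static void main(String[] args) {
            ConnectionFactory connectionFactory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");

            // Consumer side: application "AppA" reads from a queue that ActiveMQ fills
            // from the virtual topic; several AppA instances would share (load-balance) it
            DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
            container.setConnectionFactory(connectionFactory);
            container.setDestinationName("Consumer.AppA.VirtualTopic.Orders");
            container.setMessageListener(new MessageListener() {
                @Override
                public void onMessage(Message message) {
                    System.out.println("AppA received " + message);
                }
            });
            container.afterPropertiesSet();
            container.start();

            // Producer side: publish once to the virtual topic
            JmsTemplate template = new JmsTemplate(connectionFactory);
            template.setPubSubDomain(true);
            template.convertAndSend("VirtualTopic.Orders", "order-1");
        }
    }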

The actual implementation differs between vendors, but topics and queues are not fundamentally different and should perform similarly. What you should check instead is the following (a couple of these knobs are sketched after the list):

  • Persistence? (=slower)
  • Message selectors? (=slower)
  • Concurrency?
  • Durable subscribers? (=slower)
  • Request/reply, "synchronously" with temporary queues (= overhead = slower)
  • Queue prefetching (=impacts performance in some aspects)
  • Caching
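As a small sketch of two of those knobs using the plain JMS API (destination, property name and selector are made up for illustration): persistence is chosen on the producer, selectors on the consumer:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    import org.apache.activemq.ActiveMQConnectionFactory;

    public class TuningExample {
        public static void main(String[] args) throws Exception {
            ConnectionFactory connectionFactory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = connectionFactory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("test.queue");

            // Non-persistent delivery skips disk writes on the broker: faster,
            // but messages are lost if the broker goes down
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

            // A selector makes the broker filter messages for this consumer,
            // which costs evaluation time per message
            MessageConsumer consumer = session.createConsumer(queue, "priority = 'high'");

            TextMessage message = session.createTextMessage("hello");
            message.setStringProperty("priority", "high");   // matches the selector above
            producer.send(message);

            System.out.println("received " + consumer.receive(1000));
            connection.close();
        }
    }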
