[Design of JBoss ESB] - Re: Supporting entry-point
by tirelli
Jeff,
Oh, I misunderstood you the first time. But yes, as usual you are right! That is how it was designed to be.
Regarding your request for comments: the use of entry points for events is highly recommended because it reduces the matching space for the events and allows for concurrent feeding of data into the engine. The end result is better performance and scalability, as you already mentioned.
On top of that, data transformation at the entry points is also a possibility, which adds flexibility.
As of Drools 5, we don't support distributed working memory. If the business case allows for some kind of data partitioning (you know, like sending all customers whose names start with "A" to one server and those starting with "B" to another), then it is a good strategy, because it also helps reduce the match space. Of course, the same knowledge base can be replicated to all servers and sessions instantiated in each of them.
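Just to make the partitioning idea concrete, here is a rough sketch in plain Java against the Drools 5 knowledge-api: facts are routed by a simple key to one of several sessions created from the same replicated knowledge base. The class and method names below (CustomerPartitioner, etc.) are hypothetical, not from any actual ESB service.

import org.drools.KnowledgeBase;
import org.drools.runtime.StatefulKnowledgeSession;

// Hypothetical helper for illustration only; not part of JBossESB or Drools.
public class CustomerPartitioner {

    private final StatefulKnowledgeSession[] sessions;

    public CustomerPartitioner(KnowledgeBase sharedKbase, int partitions) {
        // the same knowledge base is replicated; only the data is partitioned
        sessions = new StatefulKnowledgeSession[partitions];
        for (int i = 0; i < partitions; i++) {
            sessions[i] = sharedKbase.newStatefulKnowledgeSession();
        }
    }

    /** Route a customer fact to the session that owns its partition. */
    public void insert(String customerName, Object customerFact) {
        int partition = Character.toUpperCase(customerName.charAt(0)) % sessions.length;
        StatefulKnowledgeSession session = sessions[partition];
        session.insert(customerFact);
        session.fireAllRules();
    }
}

In a real deployment the routing would of course live in the ESB and each session would sit on its own server, but the effect is the same: each session only ever matches against its own slice of the facts.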
Edson
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4240119#4240119
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4240119
[Design of JBoss ESB] - Re: Supporting entry-point
by jeffdelong
Edson,
In the use case I describe, the OrderService and CustomerService are not the producers of the events; the events are produced outside the ESB. The purpose of the use case is to describe how multiple services could receive different event streams and insert those streams into the same working memory. The purpose of having different services handle the different event streams, as mentioned in an earlier post, would be performance/scalability and autonomy.
Also, could you comment at some point in this thread on how greater scalability could be achieved by having the different services on different servers in a cluster, with the working memory distributed between the servers (if that is actually possible)?
Thanks,
Jeff
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4240113#4240113
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4240113
[Design of JBoss ESB] - Re: Supporting entry-point
by tirelli
Hi Jeff,
Let me try to understand this architecture from a higher-level point of view, neither ESB- nor Drools-specific:
* the order service executes the business tasks it needs to execute, and also publishes events.
* the customer service executes the business tasks it needs to execute, and also publishes events.
* the delivery service is interested in the events published by the order service and by the customer service and uses them to take business decisions.
Is that the case?
From a business rules / event processing perspective, it seems to me that the delivery service should register itself to listen to the events from the other services, and tie the entry points to whatever publishing mechanism the other services use, not the other way around. But I don't know much about the current ESB architecture, so I will start doing some research right away.
Can you please comment on my point of view above?
Thanks,
Edson
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4240104#4240104
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4240104
[Design of Messaging on JBoss (Messaging/JBoss)] - Re: Journal Cleanup and Journal Compactor
by clebert.suconic@jboss.com
Before jumping into my proposed solution, let me just make a quick note about what's stored on the journal.
// A list of dataFiles (used files)
private final Queue dataFiles;
// A list of freeFiles
private final Queue freeFiles = new ConcurrentLinkedQueue();
// A list of freeFiles that are already opened (for fast move-forward on the journal)
private final BlockingQueue openedFiles;
// A list of adds and updates for each recordID
// (this is being renamed to recordsMap, BTW)
private final ConcurrentMap<Long, PosFiles> posFilesMap;
Now the compacting would be:
Startup
exclusiveLockOnJournal (held for a very short time; this is required to take a valid snapshot before compacting starts)
{
- Disallow reclaiming while compacting is being done
- Set some flag such as compacting = true
- Take a snapshot of dataFiles, posFilesMap and pending transactions.
- I believe
}
- For each dataFile in the snapshot:
- Append valid records (based on the snapshot) to a new temporary dataFile. If the temporary dataFile is full, open a new one.
- As records are appended, calculate the new posFilesMap.
- As soon as the compacting is done, I need to rename the temporary files (using the process you originally described, with a small mark file).
- I also need to update the posFilesMap.
I will take the list of updates and deletes that happened while compacting was working, and replay them on the new posFilesMap (a fast operation).
This is because, at this point, I wouldn't have any information about the deletes and updates:
- When a delete happens, you only have a neg added to the DataFile, and I wouldn't know how to replay that information.
- For updates, I only have a list of which files took an update (inside PosFiles). You could have two updates for the same record, and each update may have been sent to a different file.
So I first need to compute that information, as I don't have it anywhere.
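For what it's worth, here is a very rough sketch of the copy/rebuild loop described above, in plain Java with simplified placeholder types (Record, DataFile, Snapshot) standing in for the real journal classes. It only illustrates the ordering of the steps, not the actual file handling, locking or transaction handling:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustration only: simplified stand-ins for the real journal types.
public class CompactorSketch {

    private static final int MAX_RECORDS_PER_FILE = 1000;

    static class Record { long id; }
    static class DataFile { final List<Record> records = new ArrayList<Record>(); }

    /** Snapshot taken under the short exclusive lock, before compacting starts. */
    static class Snapshot {
        List<DataFile> dataFiles;
        Map<Long, DataFile> posFilesMap; // live records only
    }

    private volatile boolean compacting = false;

    public List<DataFile> compact(Snapshot snapshot) {
        // reclaiming stays disabled while this flag is set
        compacting = true;
        try {
            List<DataFile> compacted = new ArrayList<DataFile>();
            Map<Long, DataFile> newPosFilesMap = new HashMap<Long, DataFile>();
            DataFile current = new DataFile();
            compacted.add(current);

            // copy only the records that are still valid according to the snapshot
            for (DataFile file : snapshot.dataFiles) {
                for (Record record : file.records) {
                    if (!snapshot.posFilesMap.containsKey(record.id)) {
                        continue; // deleted before the snapshot, drop it
                    }
                    if (current.records.size() >= MAX_RECORDS_PER_FILE) {
                        current = new DataFile(); // temporary file is full, open a new one
                        compacted.add(current);
                    }
                    current.records.add(record);
                    // rebuild the positions map as records are appended
                    newPosFilesMap.put(record.id, current);
                }
            }

            // at this point the real implementation would rename the temporary
            // files into place (mark file + rename), replay the deletes/updates
            // that arrived while compacting ran, and swap newPosFilesMap in
            return compacted;
        } finally {
            compacting = false;
        }
    }
}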
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4240085#4240085
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4240085
[Design of JBoss ESB] - Re: Supporting entry-point
by jeffdelong
The use case is:
1. OrderService (ESB service) specifies an entry-point named "OrderEntryPoint" and inserts order events into working memory.
2. CustomerService (ESB service) specifies an entry-point named "CustomerEntryPoint" and inserts customer facts into working memory.
3. DeliveryService (ESB service) uses rules, i.e., it creates the rulebase and working memory (stateful session) used by both of the above entry points.
The rationale for having multiple services use different entry points to insert different facts is:
- Better performance. Some of these types of CEP applications have very high event rates (hundreds or more events per second). Funneling all the events to a single service (single queue) could be a performance bottleneck.
- Cleaner design. The Order service and Customer service are independent of each other. They might have other logic that is Order- or Customer-specific. For example, the events may arrive in XML format and require transforming XML to Java, and possibly adding some event-related data (e.g., a timestamp). Another service might be added to insert other types of facts without any impact on these two services.
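A minimal sketch of items 1-3 above, using the Drools 5 knowledge-api: the DRL, fact types and identifiers below are simplified placeholders, and in the real design the DeliveryService would own the session while the other two services insert into their own entry points through the ESB.

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.rule.WorkingMemoryEntryPoint;

// Illustrative example only; the entry point names match the use case above.
public class EntryPointExample {

    // toy rules: each one only matches facts inserted through its own entry point
    private static final String DRL =
        "package sample\n" +
        "rule \"orders only\"\n" +
        "when\n" +
        "    $o : String() from entry-point \"OrderEntryPoint\"\n" +
        "then\n" +
        "    System.out.println(\"order event: \" + $o);\n" +
        "end\n" +
        "rule \"customers only\"\n" +
        "when\n" +
        "    $c : String() from entry-point \"CustomerEntryPoint\"\n" +
        "then\n" +
        "    System.out.println(\"customer fact: \" + $c);\n" +
        "end\n";

    public static void main(String[] args) {
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newByteArrayResource(DRL.getBytes()), ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());

        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        try {
            // what the OrderService would do: insert into its own entry point
            WorkingMemoryEntryPoint orders =
                    ksession.getWorkingMemoryEntryPoint("OrderEntryPoint");
            orders.insert("order-1234");

            // what the CustomerService would do, on its own entry point
            WorkingMemoryEntryPoint customers =
                    ksession.getWorkingMemoryEntryPoint("CustomerEntryPoint");
            customers.insert("customer-42");

            // the DeliveryService owns the session and fires the rules;
            // each fact only matches the rule bound to its entry point
            ksession.fireAllRules();
        } finally {
            ksession.dispose();
        }
    }
}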
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4240080#4240080
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4240080
[Design the new POJO MicroContainer] - Re: Deployers Ordering Issue
by richard.opalka@jboss.com
"alesj" wrote : Although thinking about it.
| I actually don't see why it wouldn't be valid.
|
| Perhaps you really want what is stated there,
| that in that particular stage, you have the explicit input/output ordering.
| Regardless that the stages don't match,
| e.g. output coming in later stage than the input
| whereas this would come in play when new deployer with matching input/output is added
This is exactly the reason why I'm saying:
ad1) You can't properly validate deployer dependencies/requirements in the insert phase (the domino algorithm tries to do it - wrongly).
ad2) You can do it properly only in the method DeployersImpl.getDeployersList(String stageName) (i.e., sort lazily on retrieval and detect errors there, not in the insert phase).
That's because you don't know the order in which deployers will be added to the deployers chain. IOW, you can't validate/sort until you have the full chain.
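As a rough illustration of ad2), here is a hypothetical sketch where insertion does no validation and the complete chain is ordered and checked only when the list is retrieved. The Deployer interface below is a simplified stand-in, not the MicroContainer's real one:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

// Illustration only; not the real DeployersImpl.
public class LazyDeployerChain {

    /** Simplified stand-in for a deployer that declares its inputs and outputs. */
    public interface Deployer {
        Set<String> getInputs();
        Set<String> getOutputs();
    }

    private final List<Deployer> deployers = new ArrayList<Deployer>();

    /** Insert phase: just remember the deployer; the call order must not matter. */
    public void addDeployer(Deployer deployer) {
        deployers.add(deployer);
    }

    /** Retrieval phase: sort the complete chain and detect unsatisfied inputs here. */
    public List<Deployer> getDeployersList() {
        List<Deployer> remaining = new ArrayList<Deployer>(deployers);
        List<Deployer> ordered = new ArrayList<Deployer>();
        Set<String> available = new HashSet<String>();

        while (!remaining.isEmpty()) {
            boolean progressed = false;
            for (Iterator<Deployer> it = remaining.iterator(); it.hasNext();) {
                Deployer candidate = it.next();
                if (available.containsAll(candidate.getInputs())) {
                    ordered.add(candidate);
                    available.addAll(candidate.getOutputs());
                    it.remove();
                    progressed = true;
                }
            }
            if (!progressed) {
                // only now, with the full chain known, is this really an error
                throw new IllegalStateException("Unsatisfiable deployer inputs: " + remaining);
            }
        }
        return ordered;
    }
}

The same check performed at addDeployer() time could flag false errors, because a later addDeployer() call might still supply the missing outputs; doing it at retrieval avoids that.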
View the original post : http://www.jboss.org/index.html?module=bb&op=viewtopic&p=4240072#4240072
Reply to the post : http://www.jboss.org/index.html?module=bb&op=posting&mode=reply&p=4240072