[JBossCache] - Problem after loading Huge data
by sanatmastan
Hi,
I am new to JBoss Cache, so I went through some examples and tried to apply them to my requirements. My aim is to store and retrieve the frequency of a particular key (a String), so I have written wrapper methods to add, remove, and get the frequency for a key. The tricky part is taking advantage of the tree structure of JBoss Cache to implement a trie data structure: the input key is split into characters, which are inserted into the tree as nodes, and the leaf node holds the frequency. If the same key is inserted twice, the frequency at the leaf increases to 2, and so on. This structure saves some space by letting different leaf nodes share their ancestor nodes. I did not implement any replication or persistence mechanism.
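To make the structure concrete, here is a minimal stdlib-only sketch of the trie described above (no JBoss Cache dependency; in the real setup each character would map to a child node/Fqn in the cache tree and the count would be a node attribute — the class and method names below are illustrative, not the actual wrapper code):

```java
import java.util.HashMap;
import java.util.Map;

// Each character of the key becomes one node on the path;
// the node reached by the last character stores the frequency.
public class FrequencyTrie {

    private static final class Node {
        final Map<Character, Node> children = new HashMap<>();
        int frequency; // non-zero only on nodes that terminate a key
    }

    private final Node root = new Node();

    // Walk/create one node per character, then bump the count at the leaf.
    public void add(String key) {
        Node node = root;
        for (char c : key.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new Node());
        }
        node.frequency++;
    }

    // Returns 0 if the key was never inserted.
    public int getFrequency(String key) {
        Node node = root;
        for (char c : key.toCharArray()) {
            node = node.children.get(c);
            if (node == null) {
                return 0;
            }
        }
        return node.frequency;
    }

    public static void main(String[] args) {
        FrequencyTrie trie = new FrequencyTrie();
        trie.add("cat");
        trie.add("cat");
        trie.add("car"); // shares the "c" and "a" prefix nodes with "cat"
        System.out.println(trie.getFrequency("cat")); // 2
        System.out.println(trie.getFrequency("car")); // 1
        System.out.println(trie.getFrequency("dog")); // 0
    }
}
```

Note that keys which are prefixes of other keys ("ca" vs "cat") end on interior nodes, which is why the frequency lives on the node rather than being implied by being a leaf.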
The problem: when I insert 3 million entries (50 MB), I can retrieve the frequencies as expected. But if I insert 10 million entries (about 105 MB), they all get inserted without any error (after increasing the VM heap size, of course), yet when I try to verify the existence of a key, I get "does not exist". Can anyone comment on this behaviour?
I have a few questions.
1) Will the JBoss Cache framework suit my requirement, where the tree is shallow (20 to 30 levels deep) but very broad (thousands of children per node)?
2) Are there any limitations on JBoss Cache memory size?
I would also like to add that the data volume of our application would be around 70 GB, which we are planning to cluster across different VMs on a single node.
Thanks in Advance
Sanat
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4116718#4116718
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4116718
18 years, 3 months
[JBossCache] - Re: Buddyrep issue
by FredrikJ
I have now tried to reproduce the issue in a standalone unit test and have succeeded, at least to some extent =)
I am now running two caches locally: one produces data and the other inspects the cache, causing data to gravitate to the second cache. The issue is reproduced in the sense that the producer does get a buddy backup node for itself, but I can't seem to get that node to contain any data.
The standalone test is much simpler and single-threaded compared to the real-life application, so that's a plausible reason why we do not see exactly the same behaviour (i.e. data inside the redundant buddy backup node).
In short:
The producer (master) is started:
Master created: 192.168.1.135:51469
The slave joins the cluster and is added as a buddy to the master:
MASTER:
| null {}
| /1 {1=c6m0p888dfvz}
| /_BUDDY_BACKUP_ {}
| /192.168.1.135_51470 {}
The slave fetches cache contents and data is gravitated to the slave, thus moving to the slave buddy backup. But we also see the master having a buddy backup for itself (with no data):
MASTER:
| null {}
| /_BUDDY_BACKUP_ {}
| /192.168.1.135_51469 {}
| /192.168.1.135_51470 {}
| /1 {1=c6m0p888dfvz}
The backup node the master holds for itself does not contain any data, unlike in our real application, but I think it might be symptomatic of the same issue.
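For anyone trying to reproduce this, buddy replication and data gravitation are driven by the BuddyReplicationConfig attribute of the cache service. The fragment below is a typical example of that configuration (element names as in the JBoss Cache 1.4.x-era XML; treat it as a sketch and check it against the version you are running — it is not necessarily the exact configuration used in this test):

```xml
<attribute name="BuddyReplicationConfig">
  <config>
    <!-- enable buddy replication for the cluster -->
    <buddyReplicationEnabled>true</buddyReplicationEnabled>
    <buddyLocatorClass>org.jboss.cache.buddyreplication.NextMemberBuddyLocator</buddyLocatorClass>
    <buddyLocatorProperties>numBuddies = 1</buddyLocatorProperties>
    <buddyPoolName>default</buddyPoolName>
    <buddyCommunicationTimeout>2000</buddyCommunicationTimeout>
    <!-- data gravitation: pull data out of backup trees on lookup -->
    <dataGravitationRemoveOnFind>true</dataGravitationRemoveOnFind>
    <dataGravitationSearchBackupTrees>true</dataGravitationSearchBackupTrees>
    <autoDataGravitation>false</autoDataGravitation>
  </config>
</attribute>
```

In particular, the gravitation settings determine whether a read on one cache will search and drain other nodes' _BUDDY_BACKUP_ trees, which is the behaviour the test above exercises.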
You can find the full logfiles here:
http://www.cubeia.com/misc/cache/log
You can find the source code for the test here: http://www.cubeia.com/misc/cache
Warning, the code is a bit of a hack =)
To run it, start one instance with the argument 'master' and then another instance with the argument 'slave' directly after. The system properties used are documented in the javadoc of the main class.
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4116710#4116710
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4116710
18 years, 3 months