[
https://issues.jboss.org/browse/ISPN-6672?page=com.atlassian.jira.plugin....
]
Galder Zamarreño updated ISPN-6672:
-----------------------------------
Steps to Reproduce:
1. Start docker
2. In docker terminal, run:
{code}
$ docker run -it --name master -h master -e "SLAVES=1" gustavonalle/infinispan-server-domain:9.0.0.Alpha2
{code}
3. Open new terminal and run:
{code}
$ eval $(/usr/local/bin/docker-machine env default)
$ docker run --name spark-master -ti gustavonalle/spark:1.6.0
{code}
4. Open new terminal and run:
{code}
$ eval $(/usr/local/bin/docker-machine env default)
$ docker exec -it spark-master /usr/local/spark/bin/spark-shell --master spark://172.17.0.3:7077 --packages org.infinispan:infinispan-spark_2.10:0.2 --conf spark.io.compression.codec=lz4
{code}
5. In the Scala terminal, type (you can copy-paste it directly and it should work):
{code}
import org.infinispan.spark._
import org.infinispan.spark.rdd._
import scala.util.Random
val wordList = scala.io.Source.fromFile("/usr/share/dict/cracklib-words").getLines.foldLeft(List[String]())((s, w) => w :: s)
val phrases = (0 to 400).toStream.map(i => Random.nextInt(wordList.size)).sliding(4, 4).map(_.map(wordList).mkString(" ")).toSeq
val phraseRDD = sc.parallelize(phrases).zipWithIndex.map(_.swap)
val config = new java.util.Properties
config.put("infinispan.client.hotrod.server_list","172.17.0.2:11222")
phraseRDD.writeToInfinispan(config)
{code}
6. Go to the admin console at http://172.17.0.2:9990 and navigate to "Cache containers >
clustered > default (Distributed, Sync, 2 owners)". In the "General Status" tab you'll see
that 101 puts have happened but the cache contents count is only 50, even though there's
only one node in the cluster. The "Nodes" tab, by contrast, shows "Total Entries" as 101,
which is correct.
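Why 101 is the correct total: the range `0 to 400` in step 5 produces 401 random word indices, and `sliding(4, 4)` splits them into 100 full windows of 4 plus one trailing window of a single index, i.e. 101 phrases, each of which becomes one cache entry. A minimal standalone sketch of the same pipeline (plain Scala collections with a dummy word list; no Spark context or Hot Rod server needed):

```scala
// Mimic the phrase-generation pipeline from step 5 with a dummy word list.
val wordList = List("alpha", "beta", "gamma", "delta")
val indices = (0 to 400).toStream.map(_ => scala.util.Random.nextInt(wordList.size))
val phrases = indices.sliding(4, 4).map(_.map(wordList).mkString(" ")).toSeq

// 401 indices split into windows of 4 => 100 full windows + 1 partial = 101.
println(phrases.size)  // 101

// zipWithIndex.map(_.swap) pairs each phrase with a unique Long key,
// so exactly 101 (key, value) entries are written to the cache.
val keyed = phrases.zipWithIndex.map(_.swap)
println(keyed.size)    // 101
```

This matches the "101 puts" and the "Total Entries: 101" shown in the "Nodes" tab, confirming that the "General Status" figure of 50 is the one that is wrong.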
Incorrect cache contents entries count
--------------------------------------
Key: ISPN-6672
URL:
https://issues.jboss.org/browse/ISPN-6672
Project: Infinispan
Issue Type: Bug
Components: Console
Affects Versions: 9.0.0.Alpha2, 8.2.2.Final
Reporter: Galder Zamarreño
The "Cache Contents / Entries" field in the management console is not correctly calculated. The
screenshot should show 101 entries, not 50. See steps to reproduce.
--
This message was sent by Atlassian JIRA
(v6.4.11#64026)