[JBoss Portal] - Portlet Caches Expiration Doesn't appear to work...
by analyzediz
In the JSP that I use to render the portlet's content, it appears that the portal caches the generated JSP. The portlet is used to provide a customized search functionality, so if the user refreshes the page, it should display the default state of the search form and not the previous search results.
I have tried to set the expiration cache value in the portlet.xml file for my portlet as such:
<expiration-cache>0</expiration-cache>
(By the way, the JSR 286 spec states that 0 should be the default value for expiration-cache; see PLT.22.1, Expiration Cache.)
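For completeness, this is roughly how the element sits in a JSR 286 portlet.xml (portlet name and class here are made up; cache-scope is the companion element the 2.0 schema defines alongside expiration-cache):

```xml
<portlet>
    <portlet-name>searchPortlet</portlet-name>                 <!-- made-up name -->
    <portlet-class>com.example.SearchPortlet</portlet-class>   <!-- made-up class -->
    <expiration-cache>0</expiration-cache>                     <!-- 0 = do not cache -->
    <cache-scope>private</cache-scope>                         <!-- per-user cache entries -->
</portlet>
```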
I have also tried to programmatically expire the cache using the following code snippet in my doView() method:
response.setProperty( RenderResponse.EXPIRATION_CACHE, "0" );
It is most certainly a caching problem, because if I wait a significant amount of time and then refresh the portal page, the cached JSP is refreshed to the default state.
Am I doing something wrong here, or is there a known bug that I'm not aware of?
Thanks for any input that can be provided
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4186404#4186404
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4186404
17 years, 5 months
[Security & JAAS/JBoss] - Re: @SecurityDomain, Principal resolution
by Wolfgang Knauf
Hi Christian,
try to enable logging for the security layer; maybe there is some internal error about the properties file not being retrieved:
http://www.jboss.org/community/docs/DOC-12198
(question 4)
Up to now, I have never used a "JndiLoginInitialContextFactory", only a "NamingContextFactory" plus an explicit programmatic login. According to the doc at http://www.jboss.org/community/docs/DOC-11206, "This is useful in context where a JAAS login is not desired", so it sounds like it does not fit your case.
My client code looks like this:
  Properties props = new Properties();
  props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
  props.setProperty(Context.URL_PKG_PREFIXES, "org.jboss.naming.client");
  props.setProperty(Context.PROVIDER_URL, "jnp://localhost:1099");
  props.setProperty("j2ee.clientName", ...);

  InitialContext initialContext = new InitialContext(props);

  AppCallbackHandler callbackHandler = new AppCallbackHandler(user, password.toCharArray());
  LoginContext loginContext = new LoginContext("logincontextname", callbackHandler);
  loginContext.login();
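AppCallbackHandler itself is not shown above; it is not a standard class, so here is a minimal, hypothetical stand-in, assuming it does nothing more than feed a fixed user name and password into the JAAS callbacks:

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;

// Hypothetical sketch of the AppCallbackHandler referenced in the client code:
// it hands the stored credentials to whatever login module asks for them.
public class AppCallbackHandler implements CallbackHandler {

    private final String user;
    private final char[] password;

    public AppCallbackHandler(String user, char[] password) {
        this.user = user;
        this.password = password;
    }

    public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            if (callback instanceof NameCallback) {
                ((NameCallback) callback).setName(user);       // supply the user name
            } else if (callback instanceof PasswordCallback) {
                ((PasswordCallback) callback).setPassword(password); // supply the password
            } else {
                throw new UnsupportedCallbackException(callback, "Unrecognized callback");
            }
        }
    }
}
```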
For this to work, I have to add a file "auth.conf" to my project (in "META-INF" of the app client). Its first line matches the name passed to the "LoginContext" constructor:
  logincontextname {
      // JBoss LoginModule
      org.jboss.security.ClientLoginModule required;
  };
Hope this helps
Wolfgang
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4186402#4186402
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4186402
[JBoss jBPM] - Re: advice needed, simultaneously active tasks where there i
by rossputin
Hi,
I will go through the process and ensure the tasks are 'blocking'.
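For reference, in jPDL 3 this is the blocking attribute on the task element; a hedged sketch with made-up node and transition names (blocking="true" means the token cannot leave the node while the task is unfinished):

```xml
<task-node name="review">
    <!-- blocking="true": signalling past this node is refused while the task is open -->
    <task name="approve" blocking="true"/>
    <transition name="done" to="next-node"/>
</task-node>
```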
What I am keen to do is gain the ability to reproduce this multi-state behavior.
We are using a modified version of the demo JSF webapp to allow users to work through several different processes, so transitioning works much as it does in the demo webapp.
At the bottom of 'task.xhtml' we have...
  <gd:repeat value="#{transitions}" var="transition" idVar="rid">

    ......

    <a href="#" onclick="transitionTaskForm('transition#{rid}', '#{transition.name}');">
      <h:outputText value="#{transition.name}"/>
    </a>

    <h:commandLink id="transition#{rid}" style="display:none; visibility: hidden;">
      <h:outputText value="#{transition.name}" rendered="#{! empty transition.name}"/>
      <gs:i rendered="#{empty transition.name}">
        <h:outputText value="End : (unnamed)"/>
      </gs:i>
      <ga:attribute name="rendered" value="#{! task.suspended}"/>
      <f:param name="id" value="#{id}"/>
      <ga:parameter name="id" target="#{id}">
        <f:convertNumber integerOnly="true"/>
      </ga:parameter>
      <j4j:loadTask id="#{id}" target="#{task}"/>
      <j4j:completeTask task="#{task}" transition="#{transition.name}"/>
      <n:nav outcome="success" url="myportal.jsf" storeMessages="true"/>
      <n:nav outcome="error" redirect="true" storeMessages="true"/>
    </h:commandLink>

    ......

  </gd:repeat>
So we hide the commandLink and point to a JavaScript function which does some client-side validation and then calls either...
transLink.dispatchEvent(evObj);
or
transLink.fireEvent('onclick');
which is a click on the required 'transition' command link...
What I am keen to know is: could clicking these transition links while the tasks are not 'blocking' be causing this multi-state behavior?
Thanks for your help,
regards,
Ross
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4186372#4186372
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4186372
[EJB 3.0] - EJB 3.1 Embeddable
by karltraunmueller
Hi all,
I tried to get the EJB 3.1 Embeddable prototype up and running, doing exactly as described in http://www.jboss.org/community/docs/DOC-9618.
However, I get the following exception:
  12:15:39,581 INFO  [JBossEJBContainer] Deploying jar:file:/C:/Dokumente%20und%20Einstellungen/ptrau/.m2/repository/org/jboss/ejb3/jboss-ejb3-embedded/1.0.0-SNAPSHOT/jboss-ejb3-embedded-1.0.0-SNAPSHOT.jar!/META-INF/embedded-bootstrap-beans.xml
  12:15:40,424 ERROR [AbstractKernelController] Error installing to Instantiated: name=BeanMetaDataDeployer state=Described
  java.lang.IllegalArgumentException: Wrong arguments. new for target java.lang.reflect.Constructor expected=[org.jboss.dependency.spi.Controller] actual=[org.jboss.kernel.Kernel]
      at org.jboss.reflect.plugins.introspection.ReflectionUtils.handleErrors(ReflectionUtils.java:395)
      at org.jboss.reflect.plugins.introspection.ReflectionUtils.newInstance(ReflectionUtils.java:153)
      at org.jboss.reflect.plugins.introspection.ReflectConstructorInfoImpl.newInstance(ReflectConstructorInfoImpl.java:106)
      at org.jboss.joinpoint.plugins.BasicConstructorJoinPoint.dispatch(BasicConstructorJoinPoint.java:80)
      at org.jboss.aop.microcontainer.integration.AOPConstructorJoinpoint.createTarget(AOPConstructorJoinpoint.java:276)
      at org.jboss.aop.microcontainer.integration.AOPConstructorJoinpoint.dispatch(AOPConstructorJoinpoint.java:97)
      at org.jboss.kernel.plugins.dependency.KernelControllerContextAction$JoinpointDispatchWrapper.execute(KernelControllerContextAction.java:241)
      ...
The exception happens when the AbstractKernelDeployer tries to deploy bean org.jboss.deployers.vfs.deployer.kernel.BeanMetaDataDeployer.
There's an MC JIRA issue with the same symptoms, https://jira.jboss.org/jira/browse/JBMICROCONT-176, which was resolved as Done in JBossMC-2.0.0.Beta4.
Any ideas/hints?
thanks,
Karl
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4186367#4186367
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4186367
[JBoss Cache: Core Edition] - Re: advice on cache (mis)use for transaction log store
by manik.surtani@jboss.com
Hi, sorry for not responding to this sooner. Answers inline:
"jhalliday" wrote :
| Clearly the number of replicas is critical - it must be high enough to ensure at least one node will survive any outage, but low enough to perform well.
|
| Writes must be synchronous for obvious reasons, but ideally a node that is up should not halt just because another member of the cluster is down. That model would preserve information but reduce availability, which is undesirable.
|
I am guessing that you will have session affinity, i.e., in the non-failing case it will always be one instance that works on a single transaction log. Hence, I would recommend using buddy replication, in sync mode (as per your requirement). BR also allows you to tune how many backup copies are stored, and since the number of backups is fixed, your system will scale well.
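A hedged configuration sketch, assuming the JBoss Cache 2.x XML format (the pool name and buddy count here are illustrative, not recommendations):

```xml
<attribute name="CacheMode">REPL_SYNC</attribute>   <!-- synchronous writes -->
<attribute name="BuddyReplicationConfig">
    <config>
        <buddyReplicationEnabled>true</buddyReplicationEnabled>
        <buddyLocatorClass>org.jboss.cache.buddyreplication.NextMemberBuddyLocator</buddyLocatorClass>
        <!-- numBuddies = number of backup copies kept per node -->
        <buddyLocatorProperties>numBuddies = 1</buddyLocatorProperties>
        <buddyPoolName>txlog-pool</buddyPoolName>   <!-- made-up pool name -->
    </config>
</attribute>
```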
"jhalliday" wrote :
| Similarly the crash of one buddy should not halt the system if there is an additional node available such that the total live number remains more than M.
|
The crash of a buddy will not halt the system; it will just attempt to find an alternate buddy. Even if you end up with just one node in the system, it will still run, albeit logging some severe warnings that you have nowhere to back up to! :-)
"jhalliday" wrote :
| Also, are there any numbers on the performance as a function of groups size, particularly mixing nodes on the same or different network segments. I'm thinking that to get independent failure characteristics for the nodes will probably require a distributed cluster, such that the nodes are on different power supplies etc. Having all the nodes in the same rack probably provides a false sense of security...
|
BR allows you to provide hints when selecting buddies (see the buddy group cfg attribute) so that the system will prefer buddies in the same group. You can then create groups that span racks, e.g., one on each rack.
"jhalliday" wrote :
| On a similar note, whilst cache puts must be synchronous, my design can tolerate asynchronous removes. Is such a hybrid configuration possible?
|
Option.setForceAsynchronous() allows you to set this on a per-invocation basis.
"jhalliday" wrote :
| Critically this is not the same as having all writes go through to disk. Is it possible to configure the cache loaders to write only on eviction?
|
Yes. Set passivation to true in your cache loader cfg.
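A hedged fragment, again assuming the JBoss Cache 2.x cache loader format (the loader class and location are illustrative):

```xml
<attribute name="CacheLoaderConfiguration">
    <config>
        <!-- passivation=true: entries hit the loader only when evicted from memory -->
        <passivation>true</passivation>
        <cacheloader>
            <class>org.jboss.cache.loader.FileCacheLoader</class>
            <properties>location=/tmp/txlog</properties>   <!-- made-up path -->
        </cacheloader>
    </config>
</attribute>
```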
"jhalliday" wrote :
| Also, it is vital to ensure there is no circular dependency between the cache and the transaction manager. I'm assuming this can be achieved simply by ensuring there is no transaction context on the thread at the time the cache API is called. Or does it use JTA transactions anywhere internally?
|
Yes. Just suspend any JTA transactions before making cache calls.
"jhalliday" wrote :
| One final question: Am I totally mad, or only mildly demented?
|
No, this sounds pretty interesting. :-)
Re: Bela's comment about this being write-mostly and hence not suited to a cache: I disagree, because you have session affinity and the datasets cached by each instance will not overlap. Hence you don't have concurrent writers to the same data across the cluster, which is why I suggested buddy replication. This feels a lot like HTTP session replication IMO, where only one instance really needs the data; the backup is just in case servers die and things get ugly.
Cheers
Manik
View the original post : http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4186365#4186365
Reply to the post : http://www.jboss.com/index.html?module=bb&op=posting&mode=reply&p=4186365