[JBoss AS7 Development] - Profiling AS 7 using OProfile on Fedora
by Andrig Miller
Andrig Miller [http://community.jboss.org/people/andy.miller] created the document:
"Profiling AS 7 using OProfile on Fedora"
To view the document, visit: http://community.jboss.org/docs/DOC-16814
--------------------------------------------------------------
Several times over the last year I have experimented with various tools for profiling AS. One frustrating thing about that is that the overhead of all the traditional Java profiling solutions is so high that it's hard to get meaningful results. With that in mind, I started to turn my attention to system profiling solutions, of which there are two for Linux: one is called perf, and the other OProfile.
Perf is easier to use, but it currently does not have the ability to give Java method-level detail. It also has some very interesting capabilities around locks that I would like to exploit, but without the Java method-level information it's a non-starter. I will definitely keep an eye on perf, because if it adds the method-level information, it may end up being the more useful tool. OProfile, on the other hand, does have the capability to show method-level information. The other item of note is that both of these tools measure CPU utilization, and can only give method-level information for methods that have been compiled to native code via the JIT (Just-in-Time compiler). Methods that run interpreted will not show up in the details, and will just be lumped together with all other interpreted methods. So, let's take a look at how to set up OProfile, and use it to profile a running JBoss Application Server.
First, we need to install OProfile. To do this, you can use yum, and the command is as follows:
yum install oprofile oprofile-jit
The second package (oprofile-jit) is necessary to get the Java method-level symbol information. Next, depending on the JVM that you are using, you may have to install the debug information package for the JVM. I always use OpenJDK, so I have it installed along with its debug information package. The Sun JDK has symbol information in it, so it does not need a separate installation. To install the debug information package for OpenJDK, you first have to enable the debug information repository. In my case, on the system I have been testing this on, I added it to the following file:
/etc/yum.repos.d/fedora.repo
[fedora-debuginfo]
name=Fedora $releasever - $basearch - Debug
failovermethod=priority
#baseurl=http://download.fedoraproject.org/pub/fedora/linux/releases/$releasever/E...
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=fedora-debug-$releasever&...
enabled=0
metadata_expire=7d
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-$basearch
Now, as it turns out, just adding the repository is not enough. If you want to keep the debug information packages in sync with their respective non-debug packages, you also need to install a yum plugin that will search the debug information repository for corresponding updates. Otherwise, they will get out of sync. You can install that plugin via the following command:
yum install yum-plugin-auto-update-debug-info
Now that yum is set up properly, you can install the OpenJDK debug information package via the following command:
yum install java-1.6.0-openjdk-debuginfo
Now you have both OProfile and the necessary symbol information for the JDK installed, and we can move on to profiling a running AS instance.
So, to set up an AS 7 server to be profiled, we first have to start the JVM with the OProfile agent. This is done simply by adding the following to the JVM command line:
-agentpath:/usr/lib64/oprofile/libjvmti_oprofile.so (for the 64-bit JVM, and /usr/lib/oprofile/libjvmti_oprofile.so for the 32-bit JVM)
This can be found in the standalone.conf or domain.conf in the bin directory for AS 7. The following is an example of what it looks like:
#
# Specify options to pass to the Java VM.
#
if [ "x$JAVA_OPTS" = "x" ]; then
JAVA_OPTS="-Xms10240m -Xmx10240m -XX:+UseLargePages -XX:+UseParallelOldGC -Djava.net.preferIPv4Stack=true -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -agentpath:/usr/lib64/oprofile/libjvmti_oprofile.so"
fi
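Since the agent path differs between 64-bit and 32-bit installs, one option is to pick it at runtime. This is a convenience sketch of my own, not part of the original setup; it assumes uname -m reports x86_64 on a 64-bit Fedora install, and uses the package paths given above:

```shell
# Sketch: choose the OProfile JVMTI agent path by architecture.
# Assumption: uname -m reports x86_64 on a 64-bit Fedora install.
if [ "$(uname -m)" = "x86_64" ]; then
  OPROFILE_AGENT=/usr/lib64/oprofile/libjvmti_oprofile.so
else
  OPROFILE_AGENT=/usr/lib/oprofile/libjvmti_oprofile.so
fi
JAVA_OPTS="$JAVA_OPTS -agentpath:$OPROFILE_AGENT"
echo "$JAVA_OPTS"
```

Placed after the JAVA_OPTS assignment in standalone.conf, this keeps one conf file working on both architectures.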
That's all there is to the setup for the AS, but this by itself does not enable profiling. Profiling is controlled by a separate daemon process that has to be started up and then told to turn profiling on and off, via a control program called opcontrol. The following commands will get things rolling for you. By the way, you can start up the application server at any time after getting the agent configured on the JVM command line. Just don't start the workload that you want to profile until you are ready with the opcontrol commands below.
opcontrol --start-daemon <-- Starts the daemon, but does not start profiling.
opcontrol --start <-- Starts profiling - you want to issue this command after you have started your workload running, or potentially just before.
opcontrol --dump <-- dumps the profiling data out to the default file (this can be done at any time during the workload, or right after the workload completes).
opcontrol --stop <-- Stops profiling, but leaves the daemon running.
opcontrol --shutdown <-- Shuts the daemon down, and it will no longer be running.
Once you have captured profiling data and dumped it, you can generate a report. The simplest report is produced with the following:
opreport -l --output-file=<filename>
This will give you a list, ordered from the highest percentage of CPU cycles used to the lowest. Keep in mind that this is a system-wide profiler, so it will show everything that was running on the server, not just the java process. There are many other reporting options to play with; for reference, see the OProfile documentation:
http://oprofile.sourceforge.net/doc/index.html
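Because the report covers everything on the box, it can help to cut the listing down to just the JVM's samples. Here is a post-processing sketch; the sample lines are fabricated for illustration, and the column layout (samples, %, image name, symbol name) is an assumption you should check against your opreport version:

```shell
# Filter an opreport -l style listing down to lines whose image is "java".
# The data below is made up for illustration; the assumed columns are:
# samples, percentage, image name, symbol name.
report='1200 12.0000 java void com.example.Foo.bar()
 800  8.0000 vmlinux native_write_msr_safe
 400  4.0000 java int com.example.Foo.baz()'
echo "$report" | awk '$3 == "java"'
```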
So, if you are interested in helping out with performance tuning AS 7, this is a good place to start.
--------------------------------------------------------------
Comment by going to Community
[http://community.jboss.org/docs/DOC-16814]
Create a new document in JBoss AS7 Development at Community
[http://community.jboss.org/choose-container!input.jspa?contentType=102&co...]
12 years, 10 months
[JBoss AS7 Development] - Profiling AS 7 using OProfile on Fedora
by Andrig Miller
Andrig Miller [http://community.jboss.org/people/andy.miller] created the discussion
"Profiling AS 7 using OProfile on Fedora"
To view the discussion, visit: http://community.jboss.org/message/606152#606152
--------------------------------------------------------------
[JBoss AS7 Development] - Format of a Detyped Operation Request
by Brian Stansberry
Brian Stansberry [http://community.jboss.org/people/brian.stansberry] modified the document:
"Format of a Detyped Operation Request"
To view the document, visit: http://community.jboss.org/docs/DOC-16336
--------------------------------------------------------------
The basic method a user of the AS 7 programmatic management API would use is very simple:
ModelNode execute(ModelNode operation) throws CancellationException, IOException;
where the return value is the detyped representation of the response, and operation is the detyped representation of the operation being invoked.
The purpose of this article is to document the structure of operation.
See http://community.jboss.org/docs/DOC-16354 for a discussion of the format of the response.
See http://community.jboss.org/docs/DOC-16317 for a more in-depth example of using the native management API.
h3. Simple Operations
A text representation of a simple operation would look like this:
{
"operation" => "write-core-threads",
"address" => [
("profile" => "production"),
("subsystem" => "threads"),
("bounded-queue-thread-pool" => "pool1")
],
"count" => 0,
"per-cpu" => 20
}
Java code to produce that output would be:
ModelNode op = new ModelNode();
op.get("operation").set("write-core-threads");
ModelNode addr = op.get("address");
addr.add("profile", "production");
addr.add("subsystem", "threads");
addr.add("bounded-queue-thread-pool", "pool1");
op.get("count").set(0);
op.get("per-cpu").set(20);
System.out.println(op);
The order in which the outermost elements appear in the request is not relevant. The required elements are:
* operation -- String -- The name of the operation being invoked.
* address -- the address of the managed resource against which the request should be executed. If not set, the address is the root resource. The address is an +ordered+ list of key-value pairs describing where the resource resides in the overall management resource tree. Management resources are organized in a tree, so the order in which elements in the address occur is important.
The other key/value pairs are parameter names and their values. The names and values should match what is specified in the operation's description (http://community.jboss.org/docs/DOC-16317).
Parameters may have any name, except for operation, address and operation-headers.
h3. Operation Headers
(Note: this information is correct for releases after 7.0.0.Beta2, following completion of JBAS-9112. Prior to that, the headers were at the same level as operation and address.)
Besides the special operation and address values discussed above, operation requests can also include special "header" values that help control how the operation executes. These headers are created under the special reserved word operation-headers:
ModelNode op = new ModelNode();
op.get("operation").set("write-core-threads");
ModelNode addr = op.get("address");
addr.add("base", "domain");
addr.add("profile", "production");
addr.add("subsystem", "threads");
addr.add("bounded-queue-thread-pool", "pool1");
op.get("count").set(0);
op.get("per-cpu").set(20);
op.get("operation-headers", "rollback-on-runtime-failure").set(false);
System.out.println(op);
This produces:
{
"operation" => "write-core-threads",
"address" => [
("profile" => "production"),
("subsystem" => "threads"),
("bounded-queue-thread-pool" => "pool1")
],
"count" => 0,
"per-cpu" => 20,
"operation-headers" => {
"rollback-on-runtime-failure" => false
}
}
The following operation headers are supported:
* rollback-on-runtime-failure -- boolean, optional, defaults to true. Whether an operation that successfully updates the persistent configuration model should be reverted if it fails to apply to the runtime. Operations that affect the persistent configuration are applied in two stages -- first to the configuration model and then to the actual running services. If there is an error applying to the configuration model, the operation will be aborted with no configuration change and no change to running services will be attempted. However, operations are allowed to change the configuration model even if there is a failure to apply the change to the running services -- if and only if this rollback-on-runtime-failure header is set to false. So, this header only deals with what happens if there is a problem applying an operation to the running state of a server (e.g. actually increasing the size of a runtime thread pool).
* rollout-plan -- only relevant to requests made to a Domain Controller or Host Controller. See "Operations with a Rollout Plan" for details.
h3. Composite Operations
The root resource managed by a (Domain|Host|Server)Controller will expose an operation named "composite". This operation executes a list of other operations as an atomic unit*. The request for the "composite" operation has the same fundamental structure as a simple operation (operation name, address, params as key/value pairs).
+* See the discussion below of the rollback-on-runtime-failure operation header for how the atomicity requirement can be relaxed.+
{
"operation" => "composite",
"address" => [],
"steps" => [
{
"operation" => "write-core-threads",
"address" => [
("profile" => "production"),
("subsystem" => "threads"),
("bounded-queue-thread-pool" => "pool1")
],
"count" => 0,
"per-cpu" => 20
},
{
"operation" => "write-core-threads",
"address" => [
("profile" => "production"),
("subsystem" => "threads"),
("bounded-queue-thread-pool" => "pool2")
],
"count" => 5,
"per-cpu" => 10
}
],
"rollback-on-runtime-failure" => false
}
The "composite" operation takes a single parameter:
* steps -- a list, where each item in the list has the same structure as a simple operation request. In the example above each of the two steps is modifying the thread pool configuration for a different pool. There need not be any particular relationship between the steps.
The rollback-on-runtime-failure operation header discussed above has a particular meaning when applied to a composite operation, controlling whether steps that successfully execute should be reverted if other steps fail at runtime. Note that if any steps modify the persistent configuration, and any of those steps fail, all steps will be reverted. Partial/incomplete changes to the persistent configuration are not allowed.
h3. Operations with a Rollout Plan
Operations targeted at domain or host level resources can potentially impact multiple servers. Such operations can include a "rollout plan" detailing the sequence in which the operation should be applied to servers, as well as policies detailing whether the operation should be reverted if it fails to execute successfully on some servers.
If the operation includes a rollout plan, the structure is as follows:
{
"operation" => "write-core-threads",
"address" => [
("profile" => "production"),
("subsystem" => "threads"),
("bounded-queue-thread-pool" => "pool1")
],
"count" => 0,
"per-cpu" => 20,
"rollout-plan" => {
"in-series" => [
{
"concurrent-groups" => {
"groupA" => {
"rolling-to-servers" => true,
"max-failure-percentage" => 20
},
"groupB" => undefined
},
"server-group" => {"groupC" => {
"rolling-to-servers" => false,
"max-failed-servers" => 1
}}
},
{"server-group" => {"groupC" => undefined}},
{"concurrent-groups" => {
"groupD" => {
"rolling-to-servers" => true,
"max-failure-percentage" => 20
},
"groupE" => undefined
}}
],
"rollback-across-groups" => true
}
}
As you can see, the rollout plan is simply another structure at the same level as the operation name, the address and the operation parameters. The root node of the structure allows two children:
* in-series -- a list -- A list of steps that are to be performed in series, with each step reaching completion before the next step is executed. Each step involves the application of the operation to the servers in one or more server groups. See below for details on each element in the list.
* rollback-across-groups -- boolean -- indicates whether the need to roll back the operation on all the servers in one server group should trigger a rollback across all the server groups. This is an optional setting, and defaults to false.
Each element in the list under the in-series node must have one or the other of the following structures:
* concurrent-groups -- a map of server group names to policies controlling how the operation should be applied to that server group. For each server group in the map, the operation may be applied concurrently. See below for details on the per-server-group policy configuration.
* server-group -- a single key/value mapping of a server group name to a policy controlling how the operation should be applied to that server group. See below for details on the policy configuration. +(Note: there is no difference in plan execution between this and a "concurrent-groups" map with a single entry.)+
The policy controlling how the operation is applied to the servers within a server group has the following elements, each of which is optional:
* rolling-to-servers -- boolean -- If true, the operation will be applied to each server in the group in series. If false or not specified, the operation will be applied to the servers in the group concurrently.
* max-failed-servers -- int -- Maximum number of servers in the group that can fail to apply the operation before it should be reverted on all servers in the group. The default value if not specified is zero; i.e. failure on any server triggers rollback across the group.
* max-failure-percentage -- int between 0 and 100 -- Maximum percentage of the total number of servers in the group that can fail to apply the operation before it should be reverted on all servers in the group. The default value if not specified is zero; i.e. failure on any server triggers rollback across the group.
If both max-failed-servers and max-failure-percentage are set to non-zero values, max-failure-percentage takes precedence.
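To make the precedence rule concrete, here is a worked sketch of my own (the exact trigger semantics -- rollback once the allowed count is exceeded -- are my reading of the defaults above): with 10 servers, max-failed-servers=1 and max-failure-percentage=20, the percentage wins and tolerates 10 * 20 / 100 = 2 failed servers.

```shell
# Illustration of the precedence rule: when both limits are non-zero,
# max-failure-percentage wins over max-failed-servers.
servers=10
max_failed_servers=1
max_failure_percentage=20
if [ "$max_failure_percentage" -gt 0 ]; then
  allowed=$(( servers * max_failure_percentage / 100 ))
else
  allowed=$max_failed_servers
fi
echo "rollback triggered after more than $allowed failed servers"
```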
Looking at the (contrived) example above, application of the operation to the servers in the domain would be done in 3 phases. If the policy for any server group triggers a rollback of the operation across the server group, all other server groups will be rolled back as well. The 3 phases are:
1. Server groups groupA and groupB will have the operation applied concurrently. The operation will be applied to the servers in groupA in series, while all servers in groupB will handle the operation concurrently. If more than 20% of the servers in groupA fail to apply the operation, it will be rolled back across that group. If any servers in groupB fail to apply the operation it will be rolled back across that group.
2. Once all servers in groupA and groupB are complete, the operation will be applied to the servers in groupC. Those servers will handle the operation concurrently. If more than one server in groupC fails to apply the operation it will be rolled back across that group.
3. Once all servers in groupC are complete, server groups groupD and groupE will have the operation applied concurrently. The operation will be applied to the servers in groupD in series, while all servers in groupE will handle the operation concurrently. If more than 20% of the servers in groupD fail to apply the operation, it will be rolled back across that group. If any servers in groupE fail to apply the operation it will be rolled back across that group.
h3. Default Rollout Plan
All operations that impact multiple servers will be executed with a rollout plan. However, actually specifying the rollout plan in the operation request is not required. If no rollout-plan is specified, a default plan will be generated. The plan will have the following characteristics:
* There will only be a single high level phase. All server groups affected by the operation will have the operation applied concurrently.
* Within each server group, the operation will be applied to all servers concurrently.
* Failure on any server in a server group will cause rollback across the group.
* Failure of any server group will result in rollback of all other server groups.
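As an illustration, for an operation affecting two hypothetical server groups groupA and groupB, the default plan would behave as if the following had been specified explicitly (a sketch inferred from the characteristics above, not actual generated output):

```
"rollout-plan" => {
"in-series" => [
{"concurrent-groups" => {
"groupA" => undefined,
"groupB" => undefined
}}
],
"rollback-across-groups" => true
}
```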
--------------------------------------------------------------
[JBoss Tools Development] - How to Build JBoss Tools with Maven 3
by Nick Boldt
Nick Boldt [http://community.jboss.org/people/nickboldt] modified the document:
"How to Build JBoss Tools with Maven 3"
To view the document, visit: http://community.jboss.org/docs/DOC-16604
--------------------------------------------------------------
+*This article is a replacement for its precursor, http://community.jboss.org/docs/DOC-15513 How to Build JBoss Tools 3.2 with Maven 3.*+
h2. Prerequisites
1. Java 1.6 SDK
2. Maven 3
3. Ant 1.7.1 or later
4. About 6 GB of free disk space if you want to run all integration tests (for JBoss AS, Seam and Web Services Tools) - *requires VPN access*
5. subversion client 1.6.x (should work with lower versions as well)
h2. Environment Setup
h3. Maven and Java
Make sure Maven 3 is the default on your path and Java 1.6 is used.
mvn -version
should print out something like
*Apache Maven 3.0.2* (r1056850; 2011-01-08 19:58:10-0500)
*Java version: 1.6.0_20*, vendor: Sun Microsystems Inc.
*Java home: /usr/java/jdk1.6.0_20/jre*
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32.23-170.fc12.i686", arch: "i386", family: "unix"
h2. Building Locally Via Commandline
To run a local build of JBoss Tools 3.3 against the new Eclipse 3.7-based Target Platform, I suggest a three-step approach:
a) build the parent & target platform poms (v0.0.2-SNAPSHOT) *[ONLY NEEDED WHEN THESE CHANGE]*
b) resolve the target platform to your local disk *[ONLY NEEDED WHEN THESE CHANGE]*
c) build against your local copy of the target platform [every time you change sources and want to rebuild]
Once (a) and (b) are done, you need only perform (c) iteratively until you're happy (that is, until everything compiles). This lets you test changes locally before committing back to SVN.
*(a) and (b) need only be done when the parent pom and Target Platform (TP) change.* Of course if we get these published to nexus then you may not need those first bootstrapping steps. Stay tuned - work in progress.
*a) build the parent & target platform poms (v0.0.2-SNAPSHOT)*
svn co http://svn.jboss.org/repos/jbosstools/trunk jbosstools
cd jbosstools/build/parent
mvn clean install
...
[INFO] Reactor Summary:
[INFO]
[INFO] JBoss Tools Target Platform Definition ............ SUCCESS [0.724s]
[INFO] JBoss Tools Parent ................................ SUCCESS [0.461s]
...
*NOTE: You need not fetch the entire JBoss Tools tree from SVN (or Git: http://divby0.blogspot.com/2011/01/howto-partially-clone-svn-repo-to-git....). Instead, you can just fetch the build/ folder and one or more component folders, then, as before, build the parent pom. After that, go into the component folder and run Maven there (#runmavenpercomponent).*
mkdir jbosstools
cd jbosstools
svn co http://svn.jboss.org/repos/jbosstools/trunk/build
svn co http://svn.jboss.org/repos/jbosstools/trunk/jmx
cd build/parent
mvn clean install
...
[INFO] Reactor Summary:
[INFO]
[INFO] JBoss Tools Target Platform Definition ............ SUCCESS [0.724s]
[INFO] JBoss Tools Parent ................................ SUCCESS [0.461s]
...
*b) resolve the target platform to your local disk*
There are two ways to do this:
i) Download and unpack the latest TP zip, *OR*
ii) Resolve the TP using Maven or Ant
+i) Download and unpack the latest TP zip+
You can download the TP as a zip [5] and unpack it into some folder on your disk. For convenience, the easiest is to unzip it into jbosstools/build/target-platform/REPO/, since that's where the Maven or Ant process will operate by default.
You can do that with any browser or on a command line with curl or similar:
curl -C - -O http://download.jboss.org/jbosstools/updates/target-platform_3.3.indigo/e...
...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 606M 100 606M 0 0 164k 0 1:02:54 1:02:54 --:--:-- 172k
and then unzip it:
mkdir jbosstools/build/target-platform/REPO
unzip e37M7-wtp33M7.target.zip -d jbosstools/build/target-platform/REPO
[5] http://download.jboss.org/jbosstools/updates/target-platform_3.3.indigo/e...
*OR*
+ii) Resolve the TP using Maven or Ant with wget+
*MAC USERS:* you may need to install wget: http://download.cnet.com/Wget/3000-18506_4-128268.html, or build it from source: http://www.asitis.org/installing-wget-for-mac-os-x
cd jbosstools/build/target-platform
mvn clean install -Pget.local.target
The get.local.target profile will resolve the target platform file, multiple.target, as a p2 repository on your local disk in ~/3.3.indigo/build/target-platform/REPO/. It may take a while, so from a speed point of view you're better off simply fetching the latest zip [5]. However, if you want to see what actually happens to create the TP (as done in Hudson), this is the approach to take.
Since the Maven profile is simply a wrapper call to Ant, you can also use Ant 1.7.1 or later directly:
cd jbosstools/build/target-platform
ant
*c) build against your local copy of the target platform*
*LINUX / MAC USERS*
cd build
mvn clean install -U -B -fae -e -P local.site -Dlocal.site=file:/${HOME}/3.3.indigo/build/target-platform/REPO/ | tee build.all.log.txt
(tee is a program that pipes console output to BOTH console and a file so you can watch the build AND keep a log.)
*WINDOWS USERS*
cd c:\3.3.indigo\build
mvn3 clean install -U -B -fae -e -P local.site -Dlocal.site=file:///C:/3.3.indigo/build/target-platform/REPO/
or
mvn3 clean install -U -B -fae -e -Plocal.site -Dlocal.site=file:///C:/3.3.indigo/build/target-platform/REPO/ > build.all.log.txt
If you downloaded the zip and unpacked it somewhere else, use -Dlocal.site=file:/.../ to point at that folder instead.
If you would rather build a single component (or even just a single plugin), go into that folder and run Maven there:
cd ~/3.3.indigo/build/jmx
mvn3 clean install -U -B -fae -e -P local.site -Dlocal.site=file:/${HOME}/3.3.indigo/build/target-platform/REPO/ | tee build.jmx.log.txt
+-- OR, if you prefer to use the "bootstrap profiles": --+
cd ~/3.3.indigo/build
mvn3 clean install -U -B -fae -e -P local.site,jmx-bootstrap -Dlocal.site=file:/${HOME}/3.3.indigo/build/target-platform/REPO/ | tee build.jmx.log.txt
h2. Building Locally In Eclipse
First, you must have installed m2eclipse into your Eclipse (or JBDS). You can install the currently supported version from this update site:
http://download.jboss.org/jbosstools/updates/indigo/
Next, start up Eclipse or JBDS and do *File > Import* to import the project(s) you already checked out from SVN above into your workspace.
http://community.jboss.org/servlet/JiveServlet/showImage/102-16604-18-138... http://community.jboss.org/servlet/JiveServlet/downloadImage/102-16604-18...
Browse to where you have the project(s) checked out, and select a folder to import pom projects. In this case, I'm importing the parent pom (which refers to the target platform pom). Optionally, you can add these new projects to a working set to collect them in your Package Explorer view.
http://community.jboss.org/servlet/JiveServlet/showImage/102-16604-18-138... http://community.jboss.org/servlet/JiveServlet/downloadImage/102-16604-18...
Once the project(s) are imported, you'll want to build them. You can either do *CTRL-SHIFT-X,M (Run Maven Build),* or right-click the project and select *Run As > Maven Build*. The following screenshots show how to configure a build job.
First, on the *Main* tab, set a *Name*, *Goals*, *Profile*(s), and add a *Parameter*. Or, if you prefer, put everything in the *Goals* field for simplicity:
+clean install -U -B -fae -e -Plocal.site -Dlocal.site=file:///home/nboldt/tmp/JBT_REPO_Indigo/+
Be sure to check *Resolve Workspace artifacts*, and, if you have a newer version of Maven installed, point your build at that *Maven Runtime* instead of the bundled one that ships with m2eclipse.
http://community.jboss.org/servlet/JiveServlet/showImage/102-16604-18-138... http://community.jboss.org/servlet/JiveServlet/downloadImage/102-16604-18...
On the *JRE* tab, make sure you're using a 6.0 JDK.
http://community.jboss.org/servlet/JiveServlet/showImage/102-16604-18-138... http://community.jboss.org/servlet/JiveServlet/downloadImage/102-16604-18...
On the *Refresh* tab, define which workspace resources you want to refresh when the build's done.
http://community.jboss.org/servlet/JiveServlet/showImage/102-16604-18-138... http://community.jboss.org/servlet/JiveServlet/downloadImage/102-16604-18...
On the *Common* tab, you can store the output of the build in a log file in case it's particularly long and you need to refer back to it.
http://community.jboss.org/servlet/JiveServlet/showImage/102-16604-18-138... http://community.jboss.org/servlet/JiveServlet/downloadImage/102-16604-18...
Click *Run* to run the build.
http://community.jboss.org/servlet/JiveServlet/showImage/102-16604-18-138... http://community.jboss.org/servlet/JiveServlet/downloadImage/102-16604-18...
Now you can repeat the above step to build any other component or plugin or feature or update site from the JBoss Tools repo. Simply import the project(s) and build them as above.
h2. Tips and tricks for making BOTH PDE UI and headless Maven builds happy
It's fairly common to have plugins that compile in Eclipse while Tycho fails to build them. Basically, Tycho is far more picky than Eclipse PDE.
h3. Check your build.properties
Check build.properties in your plugin. If it has warnings in Eclipse, you'll most likely end up with Tycho failing to compile your sources. Make sure that you correct all warnings.
Especially check your build.properties to have entries for *source..* and *output..*
source.. = src/
output.. = bin/
--------------------------------------------------------------