Deploy a Java Web App on Tomcat 8 with Habitat
// Chef Blog
Overview
We recently released a new Habitat package for deploying JavaEE Web applications on Tomcat 8. The best way to learn how to take advantage of this new package is with an example, so that's what we'll do in this post.
As part of this example, we'll take the perspective of the Java application developer. We'll make design decisions that are important to a developer, configure our plan in such a way that it best accommodates our developer-centric workflow, and still be able to provide all the automation necessary to deploy our application when handed off to other teams.
The following video is a companion to this article. You may find it helpful to watch the video first before diving into the details of this blog post.
Application Details
Our example application is a simple JavaEE Web application that renders an interactive map of all the National Parks in the United States. Users can browse the map, zoom in, and click on any park to get information about it.
The application utilizes a MongoDB backend that contains the information about the National Parks. A jQuery front end handles the map rendering and calls to the RESTful backend.
As with most Java applications, Maven is used to compile the source files into a WAR file suitable for deployment into Apache Tomcat or various other application containers.
Finally, the sample application is available on GitHub at https://github.com/billmeyer/national-parks.
Plan Files
Considerations
Choosing Habitat to deploy Java web applications has some great benefits worth considering:
- The automation travels with the application.
- Application developers want to focus on writing applications and delivering business value in an efficient, streamlined manner. They don't want to be encumbered by having to figure out deployment details that extend beyond their own workstations.
- Infrastructure details, like the hostname/IP address of the web and database tiers or which environment (QA, UAT, Production) the app is running in, are not primary concerns.
- Bundling the application with the automation needed to run it means the package can be exported into a variety of different runtime environments or executed natively.
plan.sh
Let's take a look at the plan.sh file that we'll be using to package up our national-parks application.
We have a couple of requirements we want to meet as part of its design:
- Because we are a Java developer, we want to be able to pull our source files straight from GitHub as opposed to making a source tarball and pushing it to a staging URL where it can be pulled from.
- Our application layer needs to be able to find its database layer and we don't want to have to hard-code hostname/port information in our configuration files. We want to take advantage of Habitat's runtime binding to be able to locate our needed resources.
So, with these considerations in mind, we can begin to step through our plan.sh file.
Basic settings
```shell
pkg_name=national-parks
pkg_description="A sample JavaEE Web app deployed in the Tomcat8 package"
pkg_origin=billmeyer
pkg_version=0.1.3
pkg_maintainer="Bill Meyer <bill@chef.io>"
pkg_license=('Apache-2.0')
pkg_source=https://github.com/billmeyer/national-parks
pkg_deps=(core/tomcat8 billmeyer/mongodb)
pkg_build_deps=(core/git core/maven)
pkg_expose=(8080)
pkg_svc_user="root"
```
The majority of these entries are self-explanatory. Worth noting are the following:
- pkg_source – This is a required setting for Habitat. Typically this setting points to a URL where a tarball of the source code can be downloaded. However, since we are pulling our source from GitHub and because we cannot leave this setting blank (the build will fail if we do), we simply put the URL of the GitHub repository to keep Habitat happy.
- pkg_deps – These are the runtime dependencies we have. In our example, we only need core/tomcat8 and billmeyer/mongodb. It's worth noting that because the Tomcat8 package is, itself, dependent upon core/jdk8, we do not need to declare a dependency on core/jdk8.
- pkg_build_deps – These are the build dependencies needed to build our application. They are not needed at runtime so, by only declaring these as build dependencies, we shrink our resulting Habitat package and any other formats we may export to (e.g., a docker container).
- pkg_expose – Because we are deploying a web app on Tomcat, we need access to Tomcat's web connector on port 8080.
Callbacks
In the next section of plan.sh, we supply our own implementation of the available callbacks.
do_download()
We override the do_download() callback simply because we want to pull from GitHub:
```shell
do_download() {
  build_line "do_download() =================================================="
  cd ${HAB_CACHE_SRC_PATH}
  build_line "\$pkg_dirname=${pkg_dirname}"
  build_line "\$pkg_filename=${pkg_filename}"
  if [ -d "${pkg_dirname}" ]; then
    rm -rf ${pkg_dirname}
  fi
  mkdir ${pkg_dirname}
  cd ${pkg_dirname}
  GIT_SSL_NO_VERIFY=true git clone --branch v${pkg_version} https://github.com/billmeyer/national-parks.git
  return 0
}
```
As you can see, this is as simple as creating a new package directory to store our source code in and cloning the appropriate repository from GitHub.
Note: as a matter of convention, our plan's pkg_version matches a git tag we create in GitHub. This allows us to pull a specific release from GitHub that matches the version this plan file was written for.
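To make that convention concrete, here is a minimal sketch of how the tag name is derived from pkg_version. The derivation is executable; the tag/push commands themselves are shown only as comments since they would touch a real repository:

```shell
#!/bin/sh
# Sketch of the version/tag convention: the git tag is simply "v" + pkg_version,
# so the plan file and the repository stay in lockstep.
pkg_version=0.1.3
tag="v${pkg_version}"
echo "$tag"   # → v0.1.3

# In the application repository you would then run (not executed here):
#   git tag v0.1.3
#   git push origin v0.1.3
```

With this in place, do_download()'s `git clone --branch v${pkg_version}` always checks out exactly the release this plan was written for.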
do_clean() & do_unpack()
Next we provide our own implementation of do_clean() and do_unpack():
```shell
do_clean() {
  build_line "do_clean() ===================================================="
  return 0
}

do_unpack() {
  # Nothing to unpack as we are pulling our code straight from GitHub
  return 0
}
```
Again, these are based on the fact that we are pulling from GitHub so we don't want the default implementation to do anything.
do_build()
Next is the do_build() callback. We need to supply a version that will build our application using Maven:
```shell
do_build() {
  build_line "do_build() ===================================================="
  # Maven requires JAVA_HOME to be set, and can be set via:
  export JAVA_HOME=$(hab pkg path core/jdk8)
  cd ${HAB_CACHE_SRC_PATH}/${pkg_dirname}/${pkg_filename}
  mvn package
}
```
Because we are building via Habitat, we've declared our dependencies on core/jdk8 and core/maven, so we can ask Habitat for the location of these packages via the hab pkg path command.
For example, to set JAVA_HOME, we can ask Habitat where the JDK8 installation resides:
```shell
[7][default:/src:0]# hab pkg path core/jdk8
/hab/pkgs/core/jdk8/8u92/20160620143238
```
and set our JAVA_HOME to point to it:
```shell
export JAVA_HOME=$(hab pkg path core/jdk8)
```
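Since hab is only available inside the studio, here is a self-contained sketch of the same pattern that stubs out `hab pkg path` with the release shown above; the stub exists purely so the snippet runs anywhere, and inside the studio you would use the real hab binary:

```shell
#!/bin/sh
# Illustration only: a stub standing in for `hab pkg path core/jdk8`,
# hard-coded to the package release shown earlier in this post.
hab() { echo "/hab/pkgs/core/jdk8/8u92/20160620143238"; }

# The pattern from do_build(): resolve the package root, point JAVA_HOME at it.
export JAVA_HOME=$(hab pkg path core/jdk8)
echo "$JAVA_HOME"   # → /hab/pkgs/core/jdk8/8u92/20160620143238
```

The same pattern works for any declared dependency, which is why none of the paths in this plan are hard-coded.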
do_install()
With the application source compiled into a WAR file, it's time to copy the national-parks.war file to the Tomcat8 webapps directory. We do this by overriding the do_install() callback.
```shell
do_install() {
  build_line "do_install() =================================================="
  # Our source files were copied over to the HAB_CACHE_SRC_PATH in do_build(),
  # so now they need to be copied into the root directory of our package through
  # the pkg_prefix variable. This is so that we have the source files available
  # in the package.
  local source_dir="${HAB_CACHE_SRC_PATH}/${pkg_dirname}/${pkg_filename}"
  local webapps_dir="$(hab pkg path core/tomcat8)/tc/webapps"
  cp ${source_dir}/target/${pkg_filename}.war ${webapps_dir}/

  # Copy our seed data so that it can be loaded into Mongo using our init hook
  cp ${source_dir}/national-parks.json $(hab pkg path ${pkg_origin}/national-parks)/
}
```
Here we set a couple of local variables: one that references where our application WAR file resides, and another for where the webapps directory exists in Tomcat. Then we simply copy the file over to Tomcat.
We also need to copy our national park seed data (national-parks.json) to the install directory so we can load it into MongoDB when our application is initialized.
do_verify()
Lastly, we override the do_verify() callback because we are cloning out of GitHub and there is no source tarball to compare a SHA sum against.
```shell
do_verify() {
  build_line "do_verify() ==================================================="
  return 0
}
```
Hooks
hooks/init
Now we can begin implementing the hooks we need to automate the deployment and configuration of our application.
Our application comes with seed data that we must load into our MongoDB instance. To do this, we supply our own hooks/init file that will use the mongoimport tool to load the data from our national-parks.json file:
```shell
#!/bin/bash
exec 2>&1

echo "Seeding Mongo Collection"
MONGODB_HOME=$(hab pkg path billmeyer/mongodb)
source {{pkg.svc_config_path}}/mongoimport-opts.conf
echo "\$MONGOIMPORT_OPTS=$MONGOIMPORT_OPTS"

# billmeyer/mongodb requirement to run mongoimport properly:
ln -s $(hab pkg path core/glibc)/lib/ld-2.22.so /lib/ld-linux-x86-64.so.2 2>/dev/null

${MONGODB_HOME}/bin/mongoimport --drop -d demo -c nationalparks --type json \
  --jsonArray --file $(hab pkg path billmeyer/national-parks)/national-parks.json ${MONGOIMPORT_OPTS}
```
NOTE: the mongoimport-opts.conf file is NOT a configuration file from Mongo's perspective; it's a necessity on the Habitat side to get files in the /config directory to have their variables substituted at runtime. When Habitat runs an application, it will only look for files in the /config directory ending in a .conf extension; all other extensions are ignored. Because we want to use Habitat's runtime binding to locate our running MongoDB instance, we need to supply a file with a .conf extension so that Habitat will perform the variable substitution we need for runtime binding to work properly. Future releases of Habitat will hopefully remedy this.
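To make that substitution behavior concrete, here is a small simulation of what happens to a {{ip}}/{{port}} template when it is rendered. Habitat actually performs this with Handlebars templating at service start; sed merely stands in for it here so the sketch is self-contained, and the host/port values are the example values used later in this post:

```shell
#!/bin/sh
# Simulation only: Habitat renders {{ip}}/{{port}} via Handlebars at runtime.
# sed stands in for the template engine so this sketch runs anywhere.
template='export MONGOIMPORT_OPTS="--host={{ip}} --port={{port}}"'
ip=172.17.0.3
port=27017

rendered=$(printf '%s\n' "$template" | sed -e "s/{{ip}}/$ip/" -e "s/{{port}}/$port/")
echo "$rendered"   # → export MONGOIMPORT_OPTS="--host=172.17.0.3 --port=27017"
```

The rendered file is then sourced by hooks/init, which is why the hook never needs a hard-coded host or port.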
hooks/run
We can now look at the hooks/run file, which is responsible for starting our application. In our case, we simply start up Tomcat to run the application.
```shell
#!/bin/bash
exec 2>&1

echo "Starting Apache Tomcat"
export JAVA_HOME=$(hab pkg path core/jdk8)
export TOMCAT_HOME="$(hab pkg path core/tomcat8)/tc"
source {{pkg.svc_config_path}}/catalina-opts.conf
echo "\$CATALINA_OPTS=$CATALINA_OPTS"

exec ${TOMCAT_HOME}/bin/catalina.sh run
```
Runtime Configuration settings
With our hooks implemented, we can begin implementing the configurable settings we want to allow to be overridden at runtime.
config/mongoimport-opts.conf
As mentioned in the Considerations section, our application needs to be told where to find the hostname and port number where our MongoDB instance is running. Furthermore, we want to take advantage of Habitat's Runtime Binding to have the Habitat Supervisor running MongoDB be able to share the hostname and port number when we start up our Tomcat instance as its peer.
As mentioned above, config/mongoimport-opts.conf isn't a formal configuration file; rather, it's a way we can build a file dynamically at runtime that we can then source into our hooks/init script to supply the dynamically resolved values (in our case, the MongoDB host and port) to mongoimport.
In this example, we start Tomcat with a command similar to:
```shell
$ hab start billmeyer/national-parks --peer 172.17.0.2 --bind database:mongodb.default
```
By doing this, we create a binding within our Habitat Supervisor with the name database that will be assigned all of the service group information we are interested in. We can then access the elements of the service group (i.e., {{ip}} and {{port}}) to tell mongoimport where to connect to.
```shell
{{~#if bind.has_database}}
{{~#each bind.database.members}}
export MONGOIMPORT_OPTS="--host={{ip}} --port={{port}}"
{{~/each}}
{{~/if}}
```
config/catalina-opts.conf
Like the config/mongoimport-opts.conf file above, we need to pass the database host and port via Java environment variables (-D) on the command line.
```shell
{{~#if bind.has_database}}
{{~#each bind.database.members}}
export CATALINA_OPTS="-DMONGODB_SERVICE_HOST={{ip}} -DMONGODB_SERVICE_PORT={{port}}"
{{~/each}}
{{~/if}}
```
Tomcat will use the CATALINA_OPTS environment variable to push these -D values to the JVM where they can be read by our java code.
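As a quick sanity check, you can confirm what the rendered options hand to the JVM; the values below are the example host and port used throughout this post, and inside the application they arrive as JVM system properties (System.getProperty("MONGODB_SERVICE_HOST") and so on):

```shell
#!/bin/sh
# Example values from this post; in a real run they come from the rendered
# catalina-opts.conf. The JVM exposes -D flags as system properties.
export CATALINA_OPTS="-DMONGODB_SERVICE_HOST=172.17.0.3 -DMONGODB_SERVICE_PORT=27017"

# Pull the values back out of the option string, as a sanity check.
host=$(echo "$CATALINA_OPTS" | sed -n 's/.*-DMONGODB_SERVICE_HOST=\([^ ]*\).*/\1/p')
port=$(echo "$CATALINA_OPTS" | sed -n 's/.*-DMONGODB_SERVICE_PORT=\([^ ]*\).*/\1/p')
echo "$host $port"   # → 172.17.0.3 27017
```

This is the whole hand-off: Habitat renders the binding values into CATALINA_OPTS, Tomcat passes them to the JVM, and the application code reads them as system properties.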
Packaging
With all of the necessary plan files authored, we can build our Habitat package.
NOTE: The entire Habitat Plan for this example can be pulled from GitHub:
```shell
$ cd ~
$ git clone https://github.com/billmeyer/national-parks-plan.git
```
Build the national-parks package
- From our ~/national-parks-plan directory, enter the studio:

```shell
# On macOS:
$ hab studio enter

# On Linux:
$ sudo hab studio enter
```
- Start the build
```shell
[1][default:/src:0]# build
: Loading /src/plan.sh
national-parks: Plan loaded
national-parks: hab-plan-build setup
national-parks: Using HAB_BIN=/hab/pkgs/core/hab/0.9.0/20160815225003/bin/hab for installs, signing, and hashing
national-parks: Resolving dependencies
» Installing core/git
↓ Downloading core/git/2.7.4/20160729215550
...
» Signing /hab/cache/artifacts/.billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.tar.xz
☛ Signing /hab/cache/artifacts/.billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.tar.xz with billmeyer-20160629135755 to create /hab/cache/artifacts/billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.hart
★ Signed artifact /hab/cache/artifacts/billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.hart.
mkdir: created directory '/src/results'
'/hab/cache/artifacts/billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.hart' -> '/src/results/billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.hart'
national-parks: hab-plan-build cleanup
national-parks:
national-parks: Source Cache: /hab/cache/src/national-parks-0.1.3
national-parks: Installed Path: /hab/pkgs/billmeyer/national-parks/0.1.3/20160826185302
national-parks: Artifact: /src/results/billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.hart
national-parks: Build Report: /src/results/last_build.env
national-parks: SHA256 Checksum: a7ee5012ca1f30de87686f433cb5092baad1e7525249d73bdb3e527306305f8e
national-parks: Blake2b Checksum: b0e2d83b411c302bf649bfbc7e53670e833ff75b808081e6e4837c7bc91c6803
national-parks:
national-parks: I love it when a plan.sh comes together.
national-parks:
national-parks: Build time: 13m17s
```
Once the build is complete, we have a Habitat package file that we can either run directly or export to other formats for our preferred runtime environment.
Execution
For this example, we want to export our application's Habitat package into a Docker image that we can then run via Docker. We also want to export the MongoDB package into a Docker image.
Export to docker
From within the hab studio, execute the following.
- Export the MongoDB docker image.
```shell
[2][default:/src:0]# hab pkg export docker billmeyer/mongodb
core/hab-pkg-dockerize is not installed
Searching for core/hab-pkg-dockerize in remote https://willem.habitat.sh/v1/depot
» Installing core/hab-pkg-dockerize
↓ Downloading core/hab-pkg-dockerize/0.9.0/20160815225538
3.93 KB / 3.93 KB | [========================================] 100.00 % 45.65 MB/s
→ Using core/acl/2.2.52/20160612075215
→ Using core/attr/2.4.47/20160612075207
→ Using core/bash/4.3.42/20160729192720
...
Step 7 : ENTRYPOINT /init.sh
 ---> Running in 5bbb01fafc60
 ---> cac9aae3a725
Removing intermediate container 5bbb01fafc60
Step 8 : CMD start billmeyer/mongodb
 ---> Running in c0533ce06691
 ---> 53c5601a02c6
Removing intermediate container c0533ce06691
Successfully built 53c5601a02c6
```
- Export the National Parks docker image
```shell
[3][default:/src:0]# hab pkg export docker billmeyer/national-parks
hab-studio: Creating Studio at /tmp/hab-pkg-dockerize-Nw3A/rootfs (baseimage)
> Using local package for billmeyer/national-parks
> Using local package for billmeyer/mongodb/3.2.6/20160824195527 via billmeyer/national-parks
> Using local package for core/acl/2.2.52/20160612075215 via billmeyer/national-parks
...
Step 7 : ENTRYPOINT /init.sh
 ---> Running in 364991c8ff8d
 ---> 6472464ba9c0
Removing intermediate container 364991c8ff8d
Step 8 : CMD start billmeyer/national-parks
 ---> Running in bf66f488ba62
 ---> 67dbaef2b854
Removing intermediate container bf66f488ba62
Successfully built 67dbaef2b854
```
- From a new terminal (not in hab studio), verify the docker images exist by executing the following:
```shell
$ docker images
REPOSITORY                 TAG                    IMAGE ID       CREATED         SIZE
billmeyer/national-parks   0.1.3-20160826185302   67dbaef2b854   3 minutes ago   710.7 MB
billmeyer/national-parks   latest                 67dbaef2b854   3 minutes ago   710.7 MB
billmeyer/mongodb          3.2.6-20160824195527   53c5601a02c6   8 minutes ago   303.2 MB
billmeyer/mongodb          latest                 53c5601a02c6   8 minutes ago   303.2 MB
```
You should see an entry for billmeyer/national-parks and one for billmeyer/mongodb.
Run the Application
- From a terminal, execute the following:
```shell
$ docker run -it -p 27017:27017 billmeyer/mongodb
```
NOTE: If you are running on Linux, run this command via sudo.
Port 27017 is what Mongo uses to listen for incoming connections, so we tell Docker to open that port up to external connections.
You will notice as it starts, the Habitat Supervisor displays its IP address:
```shell
hab-sup(MN): Starting billmeyer/mongodb
hab-sup(TP): Child process will run as user=root, group=hab
hab-sup(GS): Supervisor 172.17.0.3: 84a8cbf3-839c-4ad5-bb13-4766e0e5432e
hab-sup(GS): Census mongodb.default: fbf93825-d0e3-4866-bde0-56c29454abd1
hab-sup(GS): Starting inbound gossip listener
hab-sup(GS): Starting outbound gossip distributor
hab-sup(GS): Starting gossip failure detector
hab-sup(CN): Starting census health adjuster
...
```
172.17.0.3 in this example. We will need to pass it as our peer when we start up Tomcat in the next step.
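If you would rather capture that address in a script than read it off the screen, a small sketch parsing the log line format shown above would be; the hard-coded log line is the example from this post, and normally you would pipe the real container output in instead:

```shell
#!/bin/sh
# Parse the supervisor IP out of the startup log. The line below is the
# example from this post; in practice, pipe the container's output in.
log_line='hab-sup(GS): Supervisor 172.17.0.3: 84a8cbf3-839c-4ad5-bb13-4766e0e5432e'

peer_ip=$(echo "$log_line" | sed -n 's/.*Supervisor \([0-9.]*\):.*/\1/p')
echo "$peer_ip"   # → 172.17.0.3
```

The extracted address is exactly what we pass as --peer in the next step.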
- From a new terminal, execute the following:
```shell
$ docker run -it -p 8080:8080 billmeyer/national-parks --peer 172.17.0.3 --bind database:mongodb.default
```
We start the Tomcat instance and pass the IP address of our MongoDB peer, along with a bind option to enable the runtime binding behavior we want to take advantage of, as explained earlier.
```shell
hab-sup(MN): Starting billmeyer/national-parks
hab-sup(TP): Child process will run as user=root, group=hab
hab-sup(GS): Supervisor 172.17.0.4: e57136c8-b1d4-4efe-b8e7-308cf33d15e7
hab-sup(GS): Census national-parks.default: 564de0e4-d03f-4523-82b0-3d75346cf5fd
hab-sup(GS): Starting inbound gossip listener
```
As the Tomcat supervisor starts, you can see it join its peer:
```shell
hab-sup(GS): Joining gossip peer at 172.17.0.3:9634
hab-sup(GS): Starting outbound gossip distributor
hab-sup(GS): Starting gossip failure detector
hab-sup(CN): Starting census health adjuster
```

When this happens, Habitat triggers a refresh of the files in our plan's /config directory. As it updates, we can see confirmation of the update in the startup output:

```shell
hab-sup(SC): Updated catalina-opts.conf
hab-sup(SC): Updated mongoimport-opts.conf
hab-sup(TP): Restarting because the service config was updated via the census
```
Next, our hooks/init script runs, where we see positive confirmation that the dynamic runtime binding has been applied and that we do, in fact, have a configured IP address and port available to use:

```shell
init(PH): Seeding Mongo Collection
init(PH): $MONGOIMPORT_OPTS=--host=172.17.0.3 --port=27017
```

Lastly, our init script loads our seed data into our MongoDB instance:

```shell
init(PH): 2016-08-26T19:31:25.254+0000 connected to: 172.17.0.3:27017
init(PH): 2016-08-26T19:31:25.254+0000 dropping: demo.nationalparks
init(PH): 2016-08-26T19:31:25.279+0000 imported 359 documents
```

At this point, our hooks/run script takes over, starting Tomcat as normal.

- Test access to the application from a web browser.
If you are running Docker natively on Linux, you can point your browser directly at the IP address of your Tomcat instance:

```shell
http://somehost:8080/national-parks
```

If you are running Docker on macOS, you need to get the IP address from Docker using a command like:

```shell
$ open "http://$(docker-machine ip default):8080/national-parks"
```