Monday, August 20, 2018

HAPI FHIR Example Server Using Docker on AWS


Confession: I am not an abstract thinker. In order to learn something new, I have to have it in my hands. So when I was asked to evaluate and understand the FHIR standard, my first course of action was to lift the hood on one of the more mature reference FHIR servers available, the HAPI FHIR JPA Server.
You must realize that following this tutorial will not get you any closer to understanding the FHIR standard … but it WILL give you your own environment where you can inspect every aspect of FHIR, from issuing REST API calls to examining the data model HAPI chose to use for storage. This setup gives me total transparency from input to process to output, all with technology I am comfortable with. Your goals may be different; I just hope the guide provides value, whatever they may be.
By default, the HAPI JPA example server starts up seamlessly using an embedded Apache Derby database and Jetty server. I'm more comfortable with Tomcat and PostgreSQL, and from the documentation, HAPI seemed … well … happy to let me switch.

While it was a pleasantly straightforward exercise to set this server up the way I wanted it, there were a few gotchas. This is the guide I wish I had found when I googled “HAPI FHIR JPA server Tomcat PostgreSQL”. 🙂

I am a big Docker and Docker Compose fan (and with good reason), so I chose to create a docker-compose file that starts a Tomcat server and a PostgreSQL database and deploys the HAPI FHIR JPA Server example application to the application server.

The last bit (which became oh-so-simple thanks to my upfront efforts with Docker) is launching my server on an AWS EC2 instance. This is normally just a nice environment to work in, but for the HAPI FHIR JPA Server to be of any use to me, it was also a necessity. You see, the sample data that I would have liked to load from the HL7 website does not load properly to the HAPI server for DSTU3. Fortunately, the folks at Crucible have some synthetic patient records they will load for you; all you need is a public server URL (hence the AWS move).

Read on below for the step-by-step details. You can download the Docker artifacts from this article. And all you FHIR gurus out there,  please feel free to let me know where I could have made my life easier!

Create Dockerfiles for Tomcat and PostgreSQL.

Lukas Pradel has already done a pretty fantastic job of writing a nice tutorial for dockerizing a Tomcat & PostgreSQL setup.

There is very little we need to change in the Dockerfiles from that posting.  One important change will be to reference the HAPI FHIR JPA Server example .war. We haven't built this yet, but we will soon. I also changed the username, password and database name to match what I wanted for my application. You will see in the next steps where you will tell the HAPI application what these values are, so keep them handy. 

The Application Server (Tomcat) Dockerfile

FROM tomcat:8-jre8
MAINTAINER gmoran

RUN echo "export JAVA_OPTS=\"-Dapp.env=staging\"" > /usr/local/tomcat/bin/setenv.sh

COPY ./hapi-fhir-jpaserver-example.war /usr/local/tomcat/webapps/fhir.war
CMD ["catalina.sh", "run"]

The Database (PostgreSQL) Dockerfile

FROM postgres:9.4

ENV POSTGRES_DB fhirdata
ENV POSTGRES_USER gmoran
ENV POSTGRES_PASSWORD XXXXXXXX

Create a Docker-Compose file to orchestrate and launch the system.

The docker-compose.yml file provided in the post above also works quite well for this application. I chose to stick with port 8080 for simplicity. I also map port 5432:5432 so that I can use the psql PostgreSQL utility from any machine to interrogate the database tables.

The docker-compose.yml File

app-web:
  build: ./web
  ports:
    - "8080:8080"
  links:
    - app-db

app-db:
  build: ./db
  expose:
    - "5432"
  ports:
    - "5432:5432"
  volumes_from:
    - app-db-data

app-db-data:
  image: cogniteev/echo
  command: echo 'Data Container for PostgreSQL'
  volumes:
    - /var/lib/postgresql/data

Our next chore is to build the HAPI FHIR JPA example server. If you are familiar with Git and Maven, it is easy.

Download the HAPI source from Git.

You can download the HAPI source following these instructions, or use the git clone command-line as follows:

$ git clone https://github.com/jamesagnew/hapi-fhir.git

Modify the FhirServerConfig class to wire the database configuration.

Since we want to use PostgreSQL, we will need to add the PostgreSQL JDBC driver jar to our Maven pom.xml file. The attribute values for the 9.4 version of the jar are as follows: 
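Based on the Maven Central coordinates for the 9.4-series driver (the exact patch version below is my assumption; match it to whichever 9.4.x release you prefer), the dependency looks like this:

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>9.4.1212</version>
</dependency>
```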

In the source code, navigate to the FhirServerConfig class. Navigate first to the hapi-fhir-jpaserver-example folder; from there, the class is nested down under src/main/java/ca/uhn/fhir/jpa/demo/FhirServerConfig.java.


Using your favorite code editor, make the following changes to the dataSource(), entityManagerFactory(), and jpaProperties() methods:

public DataSource dataSource() {
    BasicDataSource retVal = new BasicDataSource();
    retVal.setDriver(new org.postgresql.Driver());
    retVal.setUrl("jdbc:postgresql://app-db:5432/fhirdata");
    retVal.setUsername("gmoran");
    retVal.setPassword("XXXXXXXX");
    return retVal;
}

@Override
@Bean()
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
    LocalContainerEntityManagerFactoryBean retVal = super.entityManagerFactory();
    retVal.setPersistenceUnitName("HAPI_PU");
    retVal.setDataSource(dataSource());
    retVal.setJpaProperties(jpaProperties());
    return retVal;
}

private Properties jpaProperties() {
    Properties extraProperties = new Properties();
    extraProperties.put("hibernate.dialect", org.hibernate.dialect.PostgreSQL94Dialect.class.getName());
    // ... the rest of the Hibernate properties stay as they are in the original example ...
    return extraProperties;
}

Note what we changed.

The driver class must be the PostgreSQL driver class (org.postgresql.Driver).

The URL is the standard JDBC URL for connecting to a database, comprising the protocol, the host, the port number and the name of the database. The host is an interesting value: app-db. If you look back at our docker-compose.yml file, you will see that we named our container app-db, and therefore, we can reference that name as the hostname for the database. The rest of the values in the URL MUST match the values we set in the docker-compose and Dockerfile configurations.

host: app-db
port: 5432
database: fhirdata
username: gmoran
password: XXXXXXXX
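Purely as a sanity check, here is a throwaway snippet (nothing HAPI-specific, just string assembly) showing how those pieces form the JDBC URL:

```shell
# Assemble the JDBC URL from the values configured in Docker
HOST=app-db
PORT=5432
DB=fhirdata
echo "jdbc:postgresql://${HOST}:${PORT}/${DB}"
# → jdbc:postgresql://app-db:5432/fhirdata
```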

Also note that we changed the hibernate dialect (org.hibernate.dialect.PostgreSQL94Dialect.class.getName()). If for some reason you change the version of PostgreSQL used in the Dockerfile, you will want to make sure this dialect class matches the version you chose.

Once you are satisfied your changes match the configuration, save this file. We will now return to the command line to build the server using Maven.

Build the HAPI FHIR JPA Server example .war file.

One item to note: with the changes we made to use PostgreSQL instead of Apache Derby, you will see test failures at the end of the build. These did not affect the stability of the server, so I ignored them (don't judge me). If they bother you, mvn install -DskipTests skips them entirely.

Return to a command line, and navigate in the HAPI source code to the hapi-fhir-jpaserver-example folder. Run the following command from that location:

$ mvn install

Locate the target folder in the hapi-fhir-jpaserver-example folder. There should be a hapi-fhir-jpaserver-example.war file created.

Copy the hapi-fhir-jpaserver-example.war file into the /web subfolder you created or downloaded with the Dockerfiles. 

Spin up an AWS EC2 instance (I used an Ubuntu free-tier instance).

I'm not going to go into great detail on HOW to work with EC2. Amazon does a pretty good job with documentation, and you should be able to find what you need to start a free AWS account and launch an EC2 instance.

Be certain to allow access to SSH and port 8080 (or whatever port you used for the web application server) in the AWS Security Group for your server. Allow access to port 5432 as well if you want to use psql or another database management utility with your PostgreSQL server. You will be given the opportunity to set this access in the EC2 Launch Wizard.

Install Docker & Docker-Compose on your EC2 instance. 

You'll need to SSH into your EC2 instance to do these next installs.

The Docker guides have great instructions on how to install Docker and Docker-Compose on Ubuntu.

Note that you will want to add the ubuntu user to the docker group on your EC2 server (sudo usermod -aG docker ubuntu, then log out and back in) so you aren't having to sudo all the time.

Move your Dockerfiles, Docker-Compose script and .war file to your EC2 instance.  

We've come a long way, and we are almost done. The last steps are to move your files to the EC2 instance, and run docker-compose to spin up the Docker containers on the EC2 server.

Once you have the tools installed, exit out of your EC2 instance terminal.

Use the tar utility to compress your files. From the root folder (the folder that holds your docker-compose.yml file), run the following command: 

$ tar cvzf web.tar.gz docker-compose.yml web db

You should now have a compressed file in the root folder named web.tar.gz that contains all the necessary files for the server. 
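If you want to double-check what went into the archive before uploading, tar tzf lists its contents. Here is a self-contained sketch using stand-in files (your real tree will hold the actual Dockerfiles and the .war):

```shell
# Stand-in project tree mirroring the layout used in this guide
mkdir -p fhir-demo/web fhir-demo/db
touch fhir-demo/docker-compose.yml fhir-demo/web/Dockerfile fhir-demo/db/Dockerfile
cd fhir-demo

# Pack exactly what docker-compose needs, then list the archive contents
tar cvzf web.tar.gz docker-compose.yml web db
tar tzf web.tar.gz
```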

Next, use the SCP utility to upload your files to the EC2 instance. 

scp -i xxx.pem web.tar.gz ubuntu@<your-ec2-hostname>:/home/ubuntu

The xxx.pem file in the command above should be the .pem file that you saved from AWS when you created your EC2 instance. The hostname should match the hostname of your EC2 server instance.

Once the files are uploaded to your EC2 instance, SSH into the EC2 server once again. Find the folder that holds your web.tar.gz file. If you followed along exactly as I did, the file should be in the /home/ubuntu folder.

Use the tar utility once again to extract your files:

$ tar xvf web.tar.gz

Run Docker-Compose to launch your containers.

Navigate to the folder that contains your docker-compose.yml file. Run the following command:

$ docker-compose up

Now, with luck, you should have the HAPI FHIR example server up and running. You can test talking to your server by navigating to it in your browser (e.g. http://<your-ec2-hostname>:8080/fhir/, since we deployed the .war as fhir.war).

Add Some Example Patient Resources to HAPI.

If you are looking to learn a bit about FHIR, it would be useful to have some FHIR resources to play around with. I found that HAPI also comes with a nifty CLI that will allow you to load data from the HL7 site just for this purpose. Sadly, that example data doesn't load, and according to the GitHub issue, no one intends to fix it.

All is not lost. Crucible is a clever project that has a number of useful tools for testing your FHIR implementations, and they include a site for generating "realistic but not real" patient resource data. This is what I used to load some data into my new server. 

In a browser, navigate to the Load Test Data tool. Enter your server's base URL, select your format and the number of patients you would like generated, and let the tool work its magic.

I loaded 100 patients, and now have a large enough stash to start really learning something about FHIR. But I think that will have to wait until tomorrow; this was enough for one day!