tag:blogger.com,1999:blog-314050412024-03-16T11:53:13.019-07:00Data. Analytics. Code. @Everywhere.Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.comBlogger48125tag:blogger.com,1999:blog-31405041.post-71563487294637611102018-08-20T08:48:00.000-07:002018-08-21T05:24:54.539-07:00HAPI FHIR Example Server Using Docker on AWS<style>
h4 {color: darkblue;}
</style>
<br />
<div style="-en-clipboard: true;">
Confession: I am not an abstract thinker. In order to learn something new, I have to have it in my hands. So when I was asked to evaluate and understand the FHIR standard, my first course of action was to lift the hood on one of the more mature reference FHIR servers available, the <a href="http://hapifhir.io/">HAPI FHIR</a> JPA Server.
</div>
<blockquote style="font-size: smaller; font-style: italic;">
<span style="color: #666666;">You must realize that following this tutorial will not get you any closer to understanding the FHIR standard … but it WILL give you your own environment where you can inspect every aspect of FHIR, from issuing REST API calls to examining the data model HAPI chose to use for storage. This setup provides me total transparency from input to process to output, all with technology I am comfortable with. Your goals may be different; I just hope the guide provides value, whatever they may be.
</span></blockquote>
<div>
By default, the HAPI JPA example server starts up seamlessly using an embedded Apache Derby database and Jetty server. I’m more comfortable with Tomcat and PostgreSQL, and from the documentation, HAPI seemed … well … happy to let me switch.
</div>
<div>
<br /></div>
<div>
While it was a pleasantly straightforward exercise to set this server up the way I wanted it, there were a few gotchas. This is the guide I wish I had found when I googled “HAPI FHIR JPA server Tomcat PostgreSQL”. 🙂
</div>
<div>
<br /></div>
<div>
I am a big <a href="https://www.docker.com/">Docker</a> and <a href="https://docs.docker.com/compose/">Docker-Compose</a> fan (and with good reason). So I chose to create a docker-compose file to start a Tomcat server and a PostgreSQL database, and deploy the HAPI FHIR JPA Server example application to the application server.
</div>
<div>
<br /></div>
<div>
The last bit (that became oh-so-simple thanks to my upfront efforts with Docker) is launching my server on an AWS EC2 instance. This is normally just a nice environment to work in, but for the HAPI FHIR JPA Server to be of any use to me, it was also a necessity. You see, the <a href="http://hl7.org/fhir/STU3/examples-json.zip">sample data</a> that I would have liked to have loaded from the HL7 website <a href="https://github.com/jamesagnew/hapi-fhir/issues/719">does not load properly to the HAPI server for DSTU3</a>. Fortunately, the <a href="https://projectcrucible.org/">folks at Crucible</a> have some synthetic patient records they will load for you; all you need is a public server URL (hence the AWS move).
</div>
<div>
<br /></div>
<div>
Read on below for the step-by-step details. You can <a href="https://drive.google.com/file/d/1I-vuRBrX8P92mU3-0_5DdujRWoR0Rvor/view?usp=sharing" target="_blank">download the Docker artifacts</a> from this article. And all you FHIR gurus out there, please feel free to let me know where I could have made my life easier!</div>
<h4>
Create Dockerfiles for Tomcat and PostgreSQL. </h4>
<div>
<a href="https://lukaspradel.com/" target="_blank">Lukas Pradel</a> has already done a fantastic job of writing <a href="https://blog.lukaspradel.com/dockerizing-a-tomcat-postgresql-java-web-application/" target="_blank">a tutorial for dockerizing a Tomcat &amp; PostgreSQL setup</a>. </div>
<div>
<br /></div>
<div>
There is very little we need to change in the Dockerfiles from that posting. One important change will be to reference the HAPI FHIR JPA Server example .war. We haven't built this yet, but we will soon. I also changed the username, password and database name to match what I wanted for my application. You will see in the next steps where you will tell the HAPI application what these values are, so keep them handy. </div>
<div>
<br /></div>
<div style="background-color: whitesmoke;">
<div>
The Application Server (Tomcat) Dockerfile</div>
<hr />
<code>
FROM tomcat:8-jre8
MAINTAINER <span style="color: red;">gmoran</span><br />
RUN echo "export JAVA_OPTS=\"-Dapp.env=staging\"" > /usr/local/tomcat/bin/setenv.sh<br />
<br />
<span style="color: red;">COPY ./hapi-fhir-jpaserver-example.war /usr/local/tomcat/webapps/fhir.war</span><br />
CMD ["catalina.sh", "run"]<br />
</code>
</div>
<br />
<div style="background-color: whitesmoke;">
The Database (PostgreSQL) Dockerfile<br />
<hr />
<code>FROM postgres:9.4</code><br />
<code>MAINTAINER <span style="color: red;">gmoran</span></code><br />
<br />
<code>ENV POSTGRES_USER <span style="color: red;">gmoran</span></code><br />
<code>ENV POSTGRES_PASSWORD <span style="color: red;">XXXXXXXXX</span></code><br />
<code>ENV POSTGRES_DB <span style="color: red;">fhirdata</span></code>
</div>
<h4>
Create a Docker-Compose file to orchestrate and launch the system.
</h4>
<div>
The docker-compose.yml file provided in the post above also works quite well for this application. I chose to stick with port 8080 for simplicity. I also map port 5432:5432 so that I can use the <b>psql</b> PostgreSQL utility from any machine to interrogate the database tables.<br />
<br /></div>
<div style="background-color: whitesmoke;">
The docker-compose.yml File
<br />
<hr />
<code style="white-space: pre;">
app-web:
build: ./web
ports:
<span style="color: red;">- "8080:8080"</span>
links:
- app-db
app-db:
build: ./db
expose:
- "5432"
<span style="color: red;">ports:
- "5432:5432"</span>
volumes_from:
- app-db-data
app-db-data:
image: cogniteev/echo
command: echo 'Data Container for PostgreSQL'
volumes:
- /var/lib/postgresql/data
</code>
</div>
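Since 5432 is published to the host, the database is reachable from any machine that has the stock psql client installed. A quick sanity check (the hostname below is a placeholder for your own server; the username, password and database come from the Dockerfile values above):

```shell
# Connect to the containerized PostgreSQL from a remote machine.
# Hostname is a placeholder -- substitute your server's public DNS name.
psql -h my_aws_ec2.compute-1.amazonaws.com -p 5432 -U gmoran fhirdata
```

Once HAPI has started at least once, \dt inside psql will show the tables Hibernate generated for the FHIR data model.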
<div style="font-weight: 400;">
<br />
Our next chore is to build the HAPI FHIR JPA example server. If you are familiar with Git and Maven, it is easy.</div>
<div>
<h4>
Download the HAPI source from Git. </h4>
</div>
<div>
You can download the HAPI source <a href="http://hapifhir.io/doc_jpa.html">following these instructions</a>, or use the git clone command-line as follows:<br />
<br />
<div style="background-color: whitesmoke;">
<code>
$ git clone https://github.com/jamesagnew/hapi-fhir.git
</code>
</div>
</div>
<h4>
Modify FhirServerConfig.java to wire up the database configuration. </h4>
<div>
Since we want to use PostgreSQL, we will need to add the PostgreSQL JDBC driver jar to our Maven pom.xml file. The attribute values for the 9.4 version of the jar are as follows: </div>
<div style="background-color: whitesmoke;">
<pre lang="xml"> <dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>9.4-1201-jdbc41</version>
</dependency>
</pre>
</div>
<div>
<br />
In the source code, navigate to the <b>FhirServerConfig.java</b> class. Go first to the <b>hapi-fhir-jpaserver-example</b> folder; the class is nested down here: </div>
<br />
<div style="background-color: whitesmoke;">
<code>
src/main/java/ca/uhn/fhir/jpa/demo/FhirServerConfig.java</code></div>
<br />
Using your favorite code editor, make the changes shown in red as follows:<br />
<br />
<div style="background-color: whitesmoke;">
<code style="white-space: pre;">
public DataSource dataSource() {
BasicDataSource retVal = new BasicDataSource();
retVal.setDriver(new <span style="color: red;">org.postgresql.Driver()</span>);
retVal.setUrl("<span style="color: red;">jdbc:postgresql://app-db:5432/fhirdata</span>");
retVal.setUsername("<span style="color: red;">gmoran</span>");
retVal.setPassword("<span style="color: red;">XXXXXXXX</span>");
return retVal;
}
@Override
@Bean()
public LocalContainerEntityManagerFactoryBean entityManagerFactory() {
LocalContainerEntityManagerFactoryBean retVal =
super.entityManagerFactory();
retVal.setPersistenceUnitName("HAPI_PU");
retVal.setDataSource(dataSource());
retVal.setJpaProperties(jpaProperties());
return retVal;
}
private Properties jpaProperties() {
Properties extraProperties = new Properties();
extraProperties.put("hibernate.dialect",
<span style="color: red;">org.hibernate.dialect.PostgreSQL94Dialect.class.getName()</span>);
</code>
</div>
<br />
Note what we changed.<br />
<br />
The driver must be the PostgreSQL driver class (org.postgresql.Driver()).<br />
<br />
The URL is the standard JDBC URL for connecting to a database, comprising the protocol, the host, the port number and the name of the database. The host is an interesting value: <b>app-db</b>. If you look back at our <b>docker-compose.yml</b> file, you will see that we named our database service <b>app-db</b>, and therefore we can reference that name as the hostname for the database. The rest of the values in the URL MUST match the values we set in the docker-compose and Dockerfile configurations.<br />
<br />
<div style="background-color: whitesmoke;">
<code>
<b>host:</b> app-db<br />
<b>port:</b> 5432<br />
<b>database:</b> fhirdata<br />
<b>username:</b> gmoran<br />
<b>password:</b> XXXXXXXX<br />
</code>
</div>
<br />
Also note that we changed the hibernate dialect (org.hibernate.dialect.PostgreSQL94Dialect.class.getName()). If for some reason you change the version of PostgreSQL used in the Dockerfile, you will want to make sure this dialect class matches the version you chose.<br />
<br />
Once you are satisfied your changes match the configuration, save this file. We will now return to the command line to build the server using Maven.<br />
<h4>
Build the HAPI FHIR JPA Server example .war file.
</h4>
<div>
<b>One item to note:</b> With the changes that we made to use PostgreSQL instead of Apache Derby, you will see errors at the end of the build. These are test failures and do not affect the stability of the server, so I ignored them (don't judge me).<br />
<br />
Return to a command line, and navigate in the HAPI source code to the <b>hapi-fhir-jpaserver-example </b>folder. Run the following command from that location:<br />
<br />
<div style="background-color: whitesmoke;">
<code>
$ mvn install
</code>
</div>
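If you would rather skip the failing tests entirely, Maven's stock flag does that. This is standard Maven, not anything HAPI-specific, and the resulting .war is the same:

```shell
# Build the example .war without running the test suite.
mvn install -DskipTests
```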
</div>
<div>
<br />
Locate the <b>target</b> folder in the <b>hapi-fhir-jpaserver-example </b>folder. There should be a <b>hapi-fhir-jpaserver-example.war</b> file created.<br />
<br />
Copy the <b>hapi-fhir-jpaserver-example.war</b> file into the <b>/web </b>subfolder you created or downloaded with the Dockerfiles. </div>
<h4>
Spin up an AWS EC2 instance (I used Ubuntu free tier).
</h4>
<div>
I'm not going to go into great detail on HOW to work with EC2. Amazon does a pretty good job with documentation, and you should be able to find what you need to <a href="https://aws.amazon.com/free/" target="_blank">start a free AWS account</a> and <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html" target="_blank">launch an EC2 instance</a>.<br />
<br />
Be certain to <b>allow access to SSH and port 8080</b> (or whatever port you may have used for the web application server) in your AWS Security Group for your server. <b>Allow access to port 5432</b> as well, if you want to use psql or another database management utility with your PostgreSQL server. You will be given the opportunity to set this access in the EC2 Launch Wizard. </div>
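If you prefer the command line to the Launch Wizard, the equivalent rules can be added with the AWS CLI. This is only a sketch: the security group ID is a placeholder, and 0.0.0.0/0 opens the ports to the whole internet, which is only reasonable for a throwaway demo server.

```shell
# Open SSH, the Tomcat port, and PostgreSQL in an existing security group.
# sg-0123456789abcdef0 is a placeholder -- use your own group ID.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22   --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 5432 --cidr 0.0.0.0/0
```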
<h4>
Install Docker & Docker-Compose on your EC2 instance.
</h4>
<div>
You'll need to <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html" target="_blank">SSH into your EC2 instance</a> to do these next installs.<br />
<br />
The Docker guides have great instructions on how to install <a href="https://docs.docker.com/install/linux/docker-ce/ubuntu/" target="_blank">Docker</a> and <a href="https://docs.docker.com/compose/install/" target="_blank">Docker-Compose</a> on Ubuntu.<br />
<br />
Note that you will want to <a href="https://docs.docker.com/install/linux/linux-postinstall/" target="_blank">add the Ubuntu user to the Docker group</a> on your EC2 server so you aren't having to sudo all the time.</div>
<br />
<h4>
Move your Dockerfiles, Docker-Compose script and .war file to your EC2 instance. </h4>
<div>
We've come a long way, and we are almost done. The last steps are to move your files to the EC2 instance, and run <b>docker-compose</b> to spin up the Docker containers on the EC2 server.</div>
<div>
<br /></div>
<div>
Once you have the tools installed, exit out of your EC2 instance terminal.</div>
<div>
<br /></div>
<div>
Use the <b>tar</b> utility to compress your files. From the root folder (the folder that holds your docker-compose.yml file), run the following command: </div>
<div>
<br /></div>
<div style="background-color: whitesmoke;">
<code>
$ tar cvzf web.tar.gz .</code></div>
<div>
<br /></div>
<div>
You should now have a compressed file in the root folder named <b>web.tar.gz</b> that contains all the necessary files for the server. </div>
<div>
<br /></div>
<div>
Next, use the <b>SCP</b> utility to upload your files to the EC2 instance. </div>
<div>
<br /></div>
<div style="background-color: whitesmoke;">
<code>
$ scp -i xxx.pem web.tar.gz ubuntu@my_aws_ec2.compute-1.amazonaws.com:/home/ubuntu</code></div>
<div>
<br />
The <b>xxx.pem</b> file in the command above should be the .pem file that you saved from AWS when you created your EC2 instance. The <b>hostname</b> should match the hostname of your EC2 server instance.<br />
<br />
Once the files are uploaded to your EC2 instance, SSH into the EC2 server once again. Find the folder that holds your <b>web.tar.gz</b> file. If you followed along exactly as I did it, the file should be in the /home/ubuntu folder.<br />
<br />
Use the tar utility once again to extract your files:<br />
<br />
<div style="background-color: whitesmoke;">
<code>
$ tar xvf web.tar.gz</code></div>
<h4>
Run Docker-Compose to launch your containers.</h4>
</div>
<div>
Navigate to the folder that contains your docker-compose.yml file. Run the following command:<br />
<br /></div>
<div style="background-color: whitesmoke;">
<code>
$ docker-compose up</code></div>
<div>
<br />
Now, with luck, you should have the HAPI FHIR example server up and running. You can test talking to your server by navigating to it in your browser:<br />
<br />
<div style="background-color: whitesmoke;">
<code>
http://my_aws_ec2.compute-1.amazonaws.com:8080/fhir/</code></div>
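A browser works, but curl is handier for scripted checks. Every FHIR server answers a capability request at [base]/metadata. Note that the exact base path of the FHIR endpoint varies by HAPI version (in this era of the example server, the DSTU3 endpoint typically sat at /baseDstu3 under the web app context), so adjust the URL if yours differs:

```shell
# Fetch the server's CapabilityStatement as a smoke test.
# Hostname is a placeholder; /fhir/baseDstu3 is the usual default path
# for this version of the example server -- adjust if yours differs.
curl -s "http://my_aws_ec2.compute-1.amazonaws.com:8080/fhir/baseDstu3/metadata?_format=json"
```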
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgm6bGrWzyhp95k-tfWKquopC8UbHerm0QnDHYrHzGLfBHuKpVHia-txkl79QctLC8R_ijMAXMXTKJfFVL0Cn3agc9ZYi5HZgQCitBzxE5ATxI2Tumnoi82kzQg7yahr_-ufEQ_Kg/s1600/new-server.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="775" data-original-width="1600" height="309" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgm6bGrWzyhp95k-tfWKquopC8UbHerm0QnDHYrHzGLfBHuKpVHia-txkl79QctLC8R_ijMAXMXTKJfFVL0Cn3agc9ZYi5HZgQCitBzxE5ATxI2Tumnoi82kzQg7yahr_-ufEQ_Kg/s640/new-server.png" width="640" /></a></div>
<h4>
Add Some Example Patient Resources to HAPI.</h4>
</div>
</div>
<div>
If you are looking to learn a bit about FHIR, it would be useful to have some FHIR resources to play around with. I found that HAPI also comes with a nifty CLI that will allow you to load data from the HL7 site just for this purpose. Sadly, that example data doesn't work, and according to the <a href="https://github.com/jamesagnew/hapi-fhir/issues/719" target="_blank">GitHub issue</a>, no one intends to fix it. </div>
<div>
<br /></div>
<div>
All is not lost. <a href="https://projectcrucible.org/" target="_blank">Crucible</a> is a clever project that has a number of useful tools for testing your FHIR implementations, and they include a site for generating "realistic but not real" patient resource data. This is what I used to load some data into my new server. </div>
<div>
<br /></div>
<div>
In a browser, navigate to the <a href="https://projectcrucible.org/testdata" target="_blank">Load Test Data tool</a>. Enter your server's base URL, select your format and the number of patients you would like generated, and let the tool work its magic. </div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjg0Qii7D6SjlDRFEK7BRGLMg3RzKXSPtUQ-9xRubRTuCNr3x3GmmIbp7sdy7sh7CdmIgeDm6KsuJg-BDs7gld6JHMZ3cz8yPbyqnaHcKa_7F6eclI6CaTFwOVAhVCFmHFxl7m4Ow/s1600/crucible-test-data.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="946" data-original-width="1554" height="388" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjg0Qii7D6SjlDRFEK7BRGLMg3RzKXSPtUQ-9xRubRTuCNr3x3GmmIbp7sdy7sh7CdmIgeDm6KsuJg-BDs7gld6JHMZ3cz8yPbyqnaHcKa_7F6eclI6CaTFwOVAhVCFmHFxl7m4Ow/s640/crucible-test-data.png" width="640" /></a></div>
<div>
<br /></div>
<div>
I loaded 100 patients, and now have a large enough stash to start really learning something about FHIR. But I think that will have to wait until tomorrow; this was enough for one day!</div>
<div>
<br /></div>
<div>
Cheers, </div>
<div>
G</div>
Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com1tag:blogger.com,1999:blog-31405041.post-40308564968531643062015-02-10T11:08:00.002-08:002015-02-10T11:08:51.929-08:00Today's the day:) Hello Hitachi Data SystemsJust over 10 years, and it has finally happened.<br />
<br />
Hitachi Data Systems intends to acquire our baby, Pentaho.<br />
<br />
I couldn't be more excited. Pentaho is the fast, maneuverable Destroyer coming alongside the Hitachi Battleship, eagerly pursuing dominance of the IoT and Big Data space. I can't think of a better fit for us as a company ready to do big things in a big market, or as a culture of innovators, entrepreneurs and talented, hard-working engineers. So many people have committed themselves to the Pentaho vision for the past decade, people I know like family. Congratulations to my Pentaho family, and Hello Hitachi.<br />
<br />
Looking forward to all we will achieve together :)<br />
<br />
From Pentaho: <a href="http://www.pentaho.com/hitachi-data-systems-announces-intent-to-acquire-pentaho">http://www.pentaho.com/hitachi-data-systems-announces-intent-to-acquire-pentaho</a><br /><br />
From the CEO: <a href="http://blog.pentaho.com/2015/02/10/a-bolder-brighter-future-for-big-data-analytics-and-the-internet-of-things-that-matter-pentaho-hds/">http://blog.pentaho.com/2015/02/10/a-bolder-brighter-future-for-big-data-analytics-and-the-internet-of-things-that-matter-pentaho-hds/</a><br /><br />
Pedro Alves: <a href="http://pedroalves-bi.blogspot.pt/2015/02/big-news-today-hitachi-data-systems-hds.html" target="_blank">http://pedroalves-bi.blogspot.pt/2015/02/big-news-today-hitachi-data-systems-hds.html </a><br /><br />
From Hitachi: <a href="https://community.hds.com/community/innovation-center">https://community.hds.com/community/innovation-center</a><br />
<br />
Bloomberg: <a href="http://www.bloomberg.com/news/articles/2015-02-10/hitachi-to-buy-pentaho-to-bolster-data-analysis-software-tools">http://www.bloomberg.com/news/articles/2015-02-10/hitachi-to-buy-pentaho-to-bolster-data-analysis-software-tools</a>Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com3tag:blogger.com,1999:blog-31405041.post-61796353024374230052014-10-15T18:51:00.003-07:002014-10-15T18:51:44.199-07:00MDX: Converting Second of Day to Standard Time NotationHad a bit of fun with Pentaho Analyzer recently. In the release of <a href="http://www.pentaho.com/download" target="_blank">Pentaho 5.2</a>, we have introduced the ability to define filters across a range of time, which is really handy when your dataset is millions of records per second.<br />
<br />
My use case included keying our time dimension on seconds per day, which results in 86,400 (60 seconds * 60 minutes * 24 hours) unique records: one to represent each unique second in a day. While this is great for simplifying query predicates, it does not help the usability or intuitiveness of the analysis report you are presenting to the user. For instance, who would intuitively understand that 56725 represents 15:45:25 in time?<br />
<br />
So I came up with this user-defined calculation that will convert seconds in a day to standard time notation. I would love to hear from anyone who can optimize this:) This is a valid MDX calculation that Mondrian will process. Since I needed to know the minimum and maximum second per hour in the display, I used the second-of-day number as a measure. <br />
<br />
<span style="font-family: "Courier New",Courier,monospace;">Format(Int([event scnd of day min]/3600), "00:") || </span><br />
<span style="font-family: "Courier New",Courier,monospace;"> </span><br />
<span style="font-family: "Courier New",Courier,monospace;"> Format(Int(([event scnd of day min] - </span><br />
<span style="font-family: "Courier New",Courier,monospace;"> (Int([event scnd of day min]/3600))*3600)/60), "00:") ||</span><br />
<span style="font-family: "Courier New",Courier,monospace;"> </span><br />
<span style="font-family: "Courier New",Courier,monospace;"> Format([event scnd of day min] - </span><br />
<span style="font-family: "Courier New",Courier,monospace;"> ((Int([event scnd of day min]/3600)*3600) + </span><br />
<span style="font-family: "Courier New",Courier,monospace;"> (Int(([event scnd of day min] - </span><br />
<span style="font-family: "Courier New",Courier,monospace;"> (Int([event scnd of day min]/3600)*3600))/60)*60)), "00") </span><br />
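The arithmetic is easier to read outside of MDX. Here is the same hours/minutes/seconds decomposition as a small shell function, just to sanity-check the logic (the MDX above is what Mondrian actually evaluates):

```shell
# Convert a second-of-day key (0..86399) to HH:MM:SS notation.
second_of_day_to_hms() {
  local t=$1
  printf '%02d:%02d:%02d\n' $((t / 3600)) $((t % 3600 / 60)) $((t % 60))
}

second_of_day_to_hms 56725   # prints 15:45:25, matching the example above
```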
<br />
<br />
Here's what it looks like in Analyzer. The columns Minimum & Maximum Second of Hour have the calculation applied to them. Note the time filter range in the filter panel. Super sweet.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<img alt="" border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiILtOhGwFO1RM_MfEAexjhqXHkgTAYWglRwTfaoWGNC73lwVGAGc3k3eaw5Ql8IeBuWk6OXgEOjApaZ_QEYxO13urfikG_PFxSTZVAI9mJJiuePJTdRj5TPhAx3iuiXvHEXdZyKA/s1600/analyzer.png" height="246" title="" width="400" /></div>
<br />
<br />Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com0tag:blogger.com,1999:blog-31405041.post-88466782791640961812014-09-28T19:17:00.000-07:002014-09-28T19:17:55.926-07:00Hello Docker. Docker. Hmmmm. I really want to love it. Everybody else loves it, so I should, right? I think maybe some of the "shiny" isn't so bright using DOCKER ON MY MAC. Although, <a href="http://viget.com/about/team/cjones" target="_blank">Chris J.</a> over at <a href="http://viget.com/" target="_blank">Viget</a> wrote this <a href="http://viget.com/extend/how-to-use-docker-on-os-x-the-missing-guide" target="_blank">blog post</a> that singularly walked me through each Mac-Docker gotcha with zero pain. Total stand-up guy, IMHO.<br />
<br />
If you are not familiar, Docker plays favorites to Linux-based operating systems, and requires a virtual machine wrapper called boot2docker in order to run on a Mac or Windows OS. Not a huge hurdle, but it definitely feels heavier and a bit more maintenance-intensive ... two of the core pain points in traditional virtual environment deployments that Docker proposes to alleviate. <br />
<br />
Beyond that silliness, there is a whole lot more *nix-based scripting than I expected. Somehow I thought the Dockerfile language would be richer, accommodating more decision-based caching. You know, something like cache this command but not this one. As I looked around and read a few comments from the Docker enthusiasts and Docker folks-proper, it seems there is a great desire to keep the Dockerfile and its DSL ... well ... simple. Limited? Is that a matter of perspective? I can appreciate simple I guess, but I still want to do hard stuff ... and thus I am pushed to the *nix script environment. This may just be a matter of stuffing myself into these new Docker jeans and waiting for them to stretch for comfort:) <br />
<br />
One blessed moment of triumph I would like to share: I was able to write a Dockerfile that would accommodate pulling source from a private Github repository using SSH. This is NOT a difficult Docker exercise. This is a persnickety SSH exercise:) The Docker container needs to register the private SSH key that will pair with the public key that you have registered at Github. At least that is the approach I took. Please do let me know if there are easier / better / more secure alternatives. <br />
<br />
So, the solution. The first few steps, I'm going to assume you know how to do, or can find guidance. They are not related to the container setup.<br />
<br />
I'm going to tell you right up front that my solution does have a weakness (requirement?) that may not be altogether comfortable, and Github downright poo-poos it. In order to get the container to load without human intervention, you need to leave off the passphrase when you generate your SSH keys (Gretchen ducks.). I planned to revisit this thorn, but just simply ran out of time. Would love to hear alternatives to this small snafu. Anyway, if you're still in the game, then read on...<br />
<br />
Here are the steps you should follow to get this container up and running.<br />
<br />
<ol>
<li>Generate a pair of SSH keys for Github, and <a href="https://help.github.com/articles/generating-ssh-keys" target="_blank">register your public key at github.com</a>.</li>
<li>Create a folder for your Docker project. </li>
<li>Place your private SSH key file (id_rsa) in your Docker project folder.</li>
<li>Create your Dockerfile, following the example below.</li>
<li>Build your image, and run your container. </li>
<li>Profit:) </li>
</ol>
<h4>
The Dockerfile</h4>
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">FROM gmoran/my-env<br />MAINTAINER Gretchen Moran gmoran@pentaho.com<br /><br />RUN mkdir -p /root/.ssh<br /><br /># Add this file ... this should be your private GitHub key ...<br />ADD id_rsa /root/.ssh/id_rsa<br /><br />RUN touch /root/.ssh/known_hosts<br />RUN sudo ssh-keyscan -t rsa -p 22 github.com >> /root/.ssh/known_hosts</span><br />
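To try the image out, build it and run a clone inside a container. The image tag and repository below are placeholders, and this assumes git is installed in the base image:

```shell
# Build the image, then clone a private repo over SSH inside the container.
docker build -t my-github-env .
docker run --rm my-github-env git clone git@github.com:yourname/private-repo.git
```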
<h4>
Running as <span style="font-family: "Courier New",Courier,monospace;">root</span> User</h4>
I am referencing the root user for this example, since that is the default user that Docker will use when you run the container. If you would like a bit more protection, you can create a user and run the container as that user with the following command ...<br />
<span style="font-size: small;"><br /></span>
<span style="font-family: "Courier New",Courier,monospace; font-size: small;">USER pentaho</span><br />
<br />
I created the 'pentaho' user as part of a Dockerfile used in the base image gmoran/my-env. IMPORTANT: Note that gmoran/my-env also downloads the OpenSSH daemon and starts it as part of the CMD Dockerfile command.<br />
<h4>
Adding the <span style="font-family: "Courier New",Courier,monospace;">id_rsa</span> File</h4>
The <b><span style="font-family: "Courier New",Courier,monospace;">id_rsa</span></b> file is the private SSH key generated as part of the first step in this process. You can find it in the directory you specified on creation, or in your ~/.ssh directory. <br />
<br />
There are a number of ways to add this key to the container. I chose the simplest ... copy it to the container user's ~/.ssh directory. OpenSSH will look for this key first when attempting to authenticate our Github request. <br />
<h4>
Adding github.com to the <span style="font-size: small;"><span style="font-family: "Courier New",Courier,monospace;">known_hosts</span></span> File </h4>
We add the github.com SSH key to the <b><span style="font-family: "Courier New",Courier,monospace;">known_hosts</span></b> file to avoid the nasty warning and prompt for this addition at runtime.<br />
<br />
In my thrashing on this, I did find several posts in the ether that recommended disabling <span style="font-family: "Courier New",Courier,monospace;">StrictHostKeyChecking</span>, which hypothetically produces the same end result as manufacturing/mod'ing the <b><span style="font-family: "Courier New",Courier,monospace;">known_hosts</span></b> file. This could however leave this poor container vulnerable, so I chose the <b><span style="font-family: "Courier New",Courier,monospace;">known_hosts</span></b> route. <br />
<h4>
At the End of the Day ...</h4>
So at the end of the day, when I thought I would be honing my Docker skills, I actually came away with a stronger set of Unix scripting skills. Good for me, all in all. I am excited about what Docker will become, and I do find the cache to be enough sugar to keep me drinking the Docker kool-aid.<br />
<br />
I should say I appreciate not actually having to struggle with Docker. It is a nice, easy, straightforward tool with very few surprises (we won't talk about CMD versus ENTRYPOINT). Any time-consuming tasks in this adventure were directly related to my very intentional avoidance of shell scripting, which I now probably have a tiny bit more appreciation for as well. <br />
<br />
In the words of the guy I like the most today, Chris Jones ... Good Guy Docker :) <br />
<br />
<br />
<br />
<br />
<br />Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com0tag:blogger.com,1999:blog-31405041.post-40306388089253404372014-04-15T07:53:00.000-07:002014-04-15T08:18:26.479-07:00 Pentaho Analytics with MongoDBI love technology partnerships. They make our lives as technologists easier by introducing the cross sections of functionality that lie just under the surface of the products, easily missed by the casual observer. When companies partner to bring whole solutions to the market, ideally consumers get more power, less maintenance, better support and lower TCO.<br />
<br />
Pentaho recognizes these benefits, and works hard to partner with technology companies that understand the value proposition of business analytics and big data. The folks over at MongoDB are rock stars with great vision in these spaces, so it was natural for Pentaho and MongoDB to partner up.<br />
<br />
My colleague Bo Borland has written <a href="http://www.packtpub.com/pentaho-analytics-for-mongodb/book" target="_blank">Pentaho Analytics with MongoDB</a>, a book that fast-tracks the reader to all the goodness at your fingertips when pairing Pentaho Analytics and MongoDB for your analytics solutions. He gets right to the point, so be ready to roll up your sleeves and dig into the products right from page 1 (or nearly so). This book is designed for technology ninjas who may have a bit of MongoDB and/or Pentaho background. In a nutshell, reading the book is a straight shot to trying out all of the integration points between the MongoDB database and the Pentaho suite of products. <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://dgdsbygo8mp3h.cloudfront.net/sites/default/files/imagecache/productview_larger/8355OS_Cover.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://dgdsbygo8mp3h.cloudfront.net/sites/default/files/imagecache/productview_larger/8355OS_Cover.jpg" height="200" width="163" /></a></div>
<br />
You can get a copy of <a href="http://www.packtpub.com/pentaho-analytics-for-mongodb/book" target="_blank">Pentaho Analytics with MongoDB here.</a> Also continue to visit the <a href="http://wiki.pentaho.com" target="_blank">Pentaho wiki</a>, as these products move fast. Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com1tag:blogger.com,1999:blog-31405041.post-5093539135539465322014-03-07T11:32:00.000-08:002014-03-18T09:56:31.348-07:00Pentaho's Women in Tech: In Good Company I was honored this week to be included in <a href="http://blog.pentaho.com/tag/womensday/" target="_blank">a blog series</a> that showcases just a few of the great women I work with, in celebration of <a href="http://www.internationalwomensday.com/" target="_blank">International Women's Day</a> on March 8.
<br />
Check out the <a href="http://blog.pentaho.com/2014/03/" target="_blank">series</a>; I think you'll find the common theme in the interviews interesting and inspiring. Pass on the links if you have girls in your life who could be interested in pursuing technology as a career. Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com3tag:blogger.com,1999:blog-31405041.post-78864854208286321632012-12-14T05:19:00.000-08:002013-12-31T21:09:36.493-08:00Pentaho's 12 Days of VisualizationsIf you are interested in the ultimate extensibility of Pentaho's visualization layer, you'll love this fun holiday gift from Pentaho: 12 Days of Visualizations. Check back on each marked date for a new plugin that demonstrates Pentaho leveraging cool viz packages like Protovis, D3 and more.<br />
<br />
<a href="http://wiki.pentaho.com/display/COM/Visualization+Plugins">http://wiki.pentaho.com/display/COM/Visualization+Plugins</a><br />
<br />
Today's visualization: the Sunburst! <br />
<br />
<img src="http://wiki.pentaho.com/download/attachments/23539255/sunburst_example_small.png?version=1&modificationDate=1354816819000" style="border: 0px solid black;" /> <br />
<br />
<br />
Merry Christmas and Happy New Year!<br />
<br />Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com0tag:blogger.com,1999:blog-31405041.post-41862538829559487812012-09-27T07:52:00.000-07:002012-09-27T07:52:38.287-07:00Resolving "AppName is damaged and can't be opened." Don't move it to the trash! I recently stumbled across this problem with one of Pentaho's applications. When the application was downloaded and installed on a Mac, launching the .app file resulted in "This app is damaged and can't be opened. Move to the trash".<br />
<br />
Relatively quickly with a few searches, we figured out that GateKeeper was the messenger, but why was she being so harsh? Our apps are unsigned (a signature improvement slated for the next release), but damaged? I was offended.<br />
<br />
As it turns out, Apple has a <a href="http://support.apple.com/kb/HT5290">decent support article</a> that explains why you might get a "damaged..." message versus GateKeeper's standard message warning the user that the application is unsigned.<br />
<br />
The answer to softening GateKeeper's tone (AKA getting her to only prompt with a security message rather than a "damaged" message) lies in the <b>Info.plist</b> file within the .app. Kurtis, our .app builder, found that if he sets the following value, then the .app reverts to being a harmless unsigned .app.<br />
<br />
<pre style="background-color: #eeeeee; border: 1px dashed #999999; color: black; font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace; font-size: 12px; line-height: 14px; overflow: auto; padding: 5px; width: 100%;"><code><key>CFBundleSignature</key>
<string>????</string>
</code></pre>
<br />
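For orientation, CFBundleSignature sits alongside the other bundle keys at the top level of Info.plist. Here is a hedged sketch of a minimal file; every key and value other than CFBundleSignature is a generic placeholder for illustration, not taken from our actual .app:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Hypothetical identifiers, for illustration only -->
    <key>CFBundleIdentifier</key>
    <string>com.example.myapp</string>
    <key>CFBundlePackageType</key>
    <string>APPL</string>
    <!-- The value that calmed GateKeeper down -->
    <key>CFBundleSignature</key>
    <string>????</string>
</dict>
</plist>
```

The four question marks are the conventional "no signature" placeholder for this key.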
I hope this solution saves someone else the heartache of deploying a "damaged" .app file.<br />
<br />
<div style="text-align: left;">
kindest regards, </div>
<div style="text-align: left;">
Gretchen</div>
Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com6tag:blogger.com,1999:blog-31405041.post-15685818453090397762011-10-08T18:26:00.001-07:002011-10-08T18:35:11.697-07:00Pentaho and OpenMRS IntegrationWe have a great opportunity to explore how Pentaho can provide ETL, analytics, and reporting benefits to <a href="http://www.openmrs.org">OpenMRS</a>, an open source medical records platform and community interested in global health care. <br /><br />Check out the first projects underway, and decide if you have time to participate:<br /><br /><a href="https://wiki.openmrs.org/pages/viewpage.action?pageId=27689370">Pentaho ETL and Designs for Dimensional Modeling</a><br /><br /><a href="https://wiki.openmrs.org/display/projects/Cohort+Queries+as+a+Pentaho+Reporting+Data+Source">Cohort Queries as Pentaho Reporting Datasource</a><br />This project still needs a lead developer; we'd like to have these projects run in tandem. <br /><br />To get involved, feel free to email me directly, or contact any of the OpenMRS mentors listed in the projects. <br /><br />kindest regards and in His grace,<br />GretchenGretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com2tag:blogger.com,1999:blog-31405041.post-61757671227475962472011-10-04T08:43:00.000-07:002011-10-06T13:45:26.914-07:00PCM11: Continuity and Change @ PentahoLast week, I enjoyed my third (of four) Pentaho Community meetup, this year held in Rome (Frascati), Italy. Jan Aertsen did a fantastic job summarizing the presentations, you can review them all <a href="http://kjube.blogspot.com/2011/09/pentaho-community-gathering-live.html">here</a>, including access to the presentation materials. At this particular juncture, I find myself in my longest commitment to a single company in my career. 
The entire ride has this very cool thread of continuity through tides of swift and constant change that comes with being a bleeding edge software company.<br /><br />When I look back over the past seven years, many times I focus solely on Pentaho milestones and growth, the markets we've entered and enjoyed success with, the new initiatives that take hold. PCM11 gave me a look at the global reach of success that Pentaho enjoys, creating opportunity and economy beyond the bounds of the official company. This is what makes open source make sense to me. This appeals to me.<br /><br />The people that make up the Pentaho community are a talented, committed group of individuals who are growing in their own endeavors, many based on the community edition of the Pentaho BI Suite of tools. Many of our community colleagues have been committed to Pentaho from the earliest releases of 2004 and 2005. Their efforts are paying off, and while Pentaho the company doesn't get everything right, we've managed to earn the respect and partnership of some incredibly driven and talented people.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhk-CWAQa5pmP-0zekoccQs9_FX1YVCjbNRxh1H2btJxHRQfj3oBpjnnLwmDPAriHfcZalSI8WL7JOPc8P6YiitfyWetSVsP0xhQYBU5LpIm0pmLlbsOEFaxiaJmvfts3KBbBdFLg/s1600/pcm11_group.jpg"><img style="display:block; margin:0px auto 10px; text-align:center;cursor:pointer; cursor:hand;width: 320px; height: 230px;border-style:none;border-width:0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhk-CWAQa5pmP-0zekoccQs9_FX1YVCjbNRxh1H2btJxHRQfj3oBpjnnLwmDPAriHfcZalSI8WL7JOPc8P6YiitfyWetSVsP0xhQYBU5LpIm0pmLlbsOEFaxiaJmvfts3KBbBdFLg/s320/pcm11_group.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5660482481728439874" /></a><br /><br />Another interesting phenomenon - community members becoming Pentaho employees, blurring any lines that get drawn at times between community and 
corporate. <br /><br />From the ranks of the Pentaho community a well of talent has sprung - Slawo, Roland, Jan, Jens, and a handful of others. Pentaho is incredibly savvy in hiring from the community. Our community is the hotbed of Pentaho, DBA, big data, analytic and reporting knowledge, both from a project development perspective and from a solutions development perspective. How many software projects suffer from the writers not understanding the use cases? Not eating their own dog food? Well, the newest Pentaho developers have been at that bowl for some time, and the internal developers can help them keep that commitment with internal initiatives delivering Pentaho solution driven information.<br /><br />And what of the other direction? Those leaving the formal Pentaho realm and working entirely community based? Well, that would be me. It's not like this is new news - I'm now infamous for my off-again, on-again relationship with formal employment :) Don't mistake me for irresponsible; I just have higher priorities. 
We all should be so blessed, right?<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWAbZNQqdu1WXb4jrElAlROHMLjZGVm0zu_PhxL-n85oXHd2ZhH4R5dTkIyavPU27DaBIxBqx3DMURBP3kgGroQnpk_atOsavwzCidBc4WrmDu6_9JOUfNDnyloPjpQuK8iho7mw/s1600/bella_jack_2011.jpg" style="text-decoration:none"><img style="display: block; margin: 0px auto 10px; text-align: center; cursor: pointer; width: 320px; height: 275px;border-style:none;border-width:0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWAbZNQqdu1WXb4jrElAlROHMLjZGVm0zu_PhxL-n85oXHd2ZhH4R5dTkIyavPU27DaBIxBqx3DMURBP3kgGroQnpk_atOsavwzCidBc4WrmDu6_9JOUfNDnyloPjpQuK8iho7mw/s320/bella_jack_2011.jpg" alt="" id="BLOGGER_PHOTO_ID_5659678823452790386" border="0" /></a><br /><br />The great news is I also have reaped the benefits of a long series of lessons in BI, big data, analytics, reporting, visualizations, problem solving and code writing. So I take these lessons learned into the community and can begin to give back a little. To my fellow community members, to other open source projects, to Pentaho. <br /><br />One project that has caught my attention is the <a href="http://www.openmrs.org">OpenMRS</a> project. OpenMRS is a medical records system platform widely deployed throughout the compromised countries of the world. OpenMRS is open source, and has a thriving community of developers, implementers, users and observers from well established world health organizations. <br /><br />I intend to spend the last quarter of this year investigating integration points between Pentaho tooling and OpenMRS. OpenMRS could use more insight into their data; Pentaho is an excellent set of tools for turning raw data into information. I see synergies here :) <br /><br />Soon, there will be a project page to stay informed if you're interested or would like to participate. I'll post back as soon as I have the leg work done. 
In the meantime, check out <a href="http://www.openmrs.org">http://www.openMRS.org</a>. It's a very rational site that gets you up to speed quickly on the project. <br /><br />Cheers & all in His grace, <br />GretchenGretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com0tag:blogger.com,1999:blog-31405041.post-40947330883953212322010-11-15T07:53:00.000-08:002010-11-15T07:56:54.113-08:00Jimmy D. takes a look at 2010 and where Pentaho is presentI couldn't resist re-posting this link to James' blog - these numbers are sooo exciting! More so for me since I remember when Pentaho was largely comprised of a small rented space and some beanbag chairs :)<br /><br /><a href="http://jamesdixon.wordpress.com/2010/11/15/150000-installations-year-to-date-for-pentaho/">James takes a look at 2010 and where Pentaho is present</a>.<br /><br />Kindest regards,<br />-GGretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com1tag:blogger.com,1999:blog-31405041.post-9903058712409040552010-09-24T13:04:00.000-07:002010-09-27T05:47:26.900-07:00Troubleshooting LocalizationI've been gathering some interesting and useful information when dealing with Pentaho Reporting, Pentaho Metadata and characters not represented in the standard ASCII character set. This bucket of tips will make it into our documentation ASAP, but I thought it prudent to share it with our community even sooner.<br /><br />IMPORTANT CAVEAT: Note that where I specify UTF-8, I am only doing that as a reference encoding... the encoding I speak of in most cases can represent any extended character set; UTF-8 is a common one for multi-national apps, because it represents multi-national characters.<br /><br />Character encoding is key to displaying multi-byte or special characters from character sets outside of the standard ASCII character set. 
Any text-based files that contain special characters in their glyph form must be encoded as at least UTF-8, or in the character encoding for the language you are attempting to display.<br /><br />The character encoding is significant no matter where these characters reside or travel - if the file or database stores the characters as UTF-8, then Java must handle those characters as UTF-8 and wherever the characters' destination is, be it a browser window or system file, the destination must also render the characters using the same character encoding.<br /><br />So, this means:<br /><br />1. Check any and all TXT or CSV files in an appropriate editor to verify that they are encoded in the correct character encoding. In a pinch, Notepad will do, but if you are seriously dealing with localization, it's in your best interest to invest in or download a good Unicode text editor.<br /><br />2. Make sure that your HTML and XML files have a meta tag specifying your chosen encoding as the character set. For example:<br /><br /><span style="font-weight: bold; font-family: courier new;font-size:85%;" ><?xml version="1.0" encoding="UTF-8"?><br /><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" ></span><br /><br />And, if it actually appears in an xhtml document (as suggested by the xml declaration), the content type should probably be application/xhtml+xml, and the meta tag should be self-closing, like so:<br /><br /><span style="font-weight: bold; font-family: courier new;font-size:85%;" ><meta http-equiv="Content-Type" content="application/xhtml+xml; charset=UTF-8" /> </span><br /><br />3. The Pentaho BI Server allows you to specify a default encoding in a context parameter in the web.xml file of the webapp. This "default encoding" applies to any XML documents that the server generates. The platform adds an xml prologue to these documents and sets the encoding to that of the BI Platform, which comes from web.xml. By default, the server assumes this is UTF-8. 
If you want a different default encoding, specify it in the web.xml.<br /><br />4. You also want to make sure that the default encoding that Java (specifically, the JVM that is running the Pentaho application) is using matches the encoding that the Pentaho application is using. We just mentioned that the default encoding for the Pentaho BI Server is UTF-8. So, what is the default encoding for the JVM? The JVM determines its encoding from the system property "file.encoding". As of Java 1.4.2, this property is available and is set from the default OS locale. However, on Windows systems, this default locale may not exist, so Java makes a best guess. As you can see, knowing what the default encoding is can be a bit nebulous, so we recommend setting the encoding for Java on the command line:<br /><br /><span style="font-weight: bold;font-size:85%;" ><span style="font-family: courier new;">java -Dfile.encoding=UTF-8</span></span><br /><br />You will want to add this command line parameter to any Pentaho application startup script that you are attempting to use internationally. Specifically for the Pentaho BI Server, you would want to set this command line parameter in the start-pentaho.bat | .sh script.<br /><br /><span style="font-weight: bold;">It's important to note that we don't demand UTF-8.</span> We do (for now) demand that whatever file.encoding is specified matches what the web.xml context parameter "encoding" says. So - as long as this param says ISO-8859-1 and file.encoding says ISO-8859-1, you're still good.<br /><br />Next, understand that common fonts do not have all of the characters possibly represented in the UTF-8 character set or other extended character sets. 
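Before moving on to fonts: the encoding rules above (points 1-4) all guard against a single failure mode, bytes written under one encoding being decoded under another. The following standalone Java sketch (illustrative only, not part of any Pentaho codebase; the class name is mine) makes that corruption visible, and prints the JVM default encoding that -Dfile.encoding influences:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingCheck {

    // Decode the given bytes under the given charset.
    public static String decode(byte[] bytes, Charset cs) {
        return new String(bytes, cs);
    }

    public static void main(String[] args) {
        // U+00E9 ("e" with an acute accent) encodes to the two bytes 0xC3 0xA9 in UTF-8.
        byte[] utf8Bytes = "\u00e9".getBytes(StandardCharsets.UTF_8);

        // Decoded with the matching charset, the original character comes back.
        System.out.println(decode(utf8Bytes, StandardCharsets.UTF_8));

        // Decoded as ISO-8859-1, each byte becomes its own character (mojibake).
        System.out.println(decode(utf8Bytes, StandardCharsets.ISO_8859_1));

        // The JVM-wide default that the file.encoding system property feeds.
        System.out.println(Charset.defaultCharset());
    }
}
```

Run it once plain and once with -Dfile.encoding=ISO-8859-1 and, on the JVMs of this era, watch the last line change; the first two lines are exactly the mismatch you see when a report's bytes and the server's assumed encoding disagree.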
So, while your encoding may be correct, if you specify a font that doesn't include the glyph for a multi-byte character, it's likely to render as a square, question mark, or some other seemingly unrelated character.<br /><br />A good test font on Windows systems is "Arial Unicode MS", which is distributed with MS Office and is claimed to have every UTF-8 character glyph available. Its ability to represent every character makes it a good TEST font, but comes with a price - the font is nearly 24 MB. You do not want to recommend this as a production font, since as a best practice guideline, we tell customers to embed their fonts with certain output formats, and this font would equate to staggering overhead in download sizes. The proper recommendation is to tell customers to find the font that best represents the consumer base's languages for that report.<br /><br />So how do we control which fonts and encodings are used in Pentaho Reports? It's a bucket of valuable information I'm attempting to summarize here:<br /><br />First, encodings:<br /><br />In Pentaho reports, there are global configuration properties for the different output formats. The global report engine configuration can be found in the Pentaho BI Server installation under the pentaho webapp: pentaho/WEB-INF/classes/classic-engine.properties.<br /><br /><span style="font-weight: bold;font-size:85%;" ><span style="font-family: courier new;">org.pentaho.reporting.engine.classic.core.modules.output.table.html.Encoding=UTF-8</span><br /><span style="font-family: courier new;">org.pentaho.reporting.engine.classic.core.modules.output.pageable.pdf.Encoding=UTF-8</span><br /><span style="font-family: courier new;">org.pentaho.reporting.engine.classic.core.modules.output.table.csv.Encoding=UTF-8</span></span><br /><br />And fonts:<br /><br />1. 
If you have a metadata model in play, make sure that the metadata concept properties for the font-family are all set to a font that is installed on the server serving up the model and is capable of rendering the special characters you need represented. There is a Base concept (found in the Concept Editor) with a default font-family whose configuration you will want to verify (and modify if necessary).<br /><br />2. If you are using any of the templates designed for Report Design Wizard or Web Adhoc Query and Reporting, you will want to verify/modify those templates to use a font that is capable of rendering the special characters you need represented. The templates for Report Design Wizard are found in the Report Designer's /templates directory. The templates for WAQR are found in the Pentaho BI Server solutions directory under pentaho-solutions/system/waqr/templates.<br /><br />3. On Windows, what determines whether Pentaho can find an installed font? A few things! First, look in the Windows Control Panel (or modern equivalent), under Fonts... these are the fonts that should be available to the reports generated by the Pentaho BI Server. If for some reason you want to include a font not in the system fonts directory, you can add additional directories of fonts.<br /><br />This is done using a configuration file that you would create and place in the Pentaho webapp WEB-INF/classes directory, which basically creates an override for the configuration file that is found in the libfonts-x.x.x.jar library in the Pentaho webapp primary classpath. The name of the libfont report configuration is libfont.properties. 
Create this file, place it in the classes directory and add the following configuration property to it, with your font location of course.<br /><br /><span style="font-weight: bold;font-size:85%;" ><span style="font-family: courier new;">org.pentaho.reporting.libraries.fonts.extra-font-dirs.myNewDir=c:/myNewDir/myFonts</span></span><br /><br />Note: There is an open issue with this property that should be fixed with the SUGAR release of the Pentaho BI Server:<a href="http://jira.pentaho.com/browse/PRD-2145"> http://jira.pentaho.com/browse/PRD-2145</a>.<br /><br />4. Our best practice recommendation for ensuring the proper rendering of special characters in PDF reports is:<br />a. Embed the font. This can be accomplished using the following global reporting configuration property: <span style="font-weight: bold;font-size:85%;" ><span style="font-family: courier new;">org.pentaho.reporting.engine.classic.core.modules.output.pageable.pdf.EmbedFonts=true</span></span><br />b. The font should be a TrueType font.<br /><br />Also important to note is that you can confirm what fonts the Pentaho BI Server is aware of, as the reporting engine creates a cache of the fonts it has registered. 
If you are at all concerned that the server hasn't correctly registered a new font from the system, you can blow away the cache, restart the server, and the reporting engine will load all system fonts anew.<br /><br />The cache exists at <span style="font-family: courier new; font-weight: bold;font-size:85%;" >$HOME/.pentaho/cache/libfonts</span>.Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com3tag:blogger.com,1999:blog-31405041.post-73994906010090282052010-08-20T06:26:00.000-07:002010-08-22T05:11:39.928-07:00Pentaho Architect's Bootcamp Training Now AvailableLast week, I had the pleasure of offering our very first 5-day session of the Pentaho Architect's Bootcamp training, and overall, it was a big success!<br /><br />There comes a point in many advanced deployments of the Pentaho BI Suite where some feature or requirement pushes the boundaries of the out-of-the-box product capabilities. Since the beginning of Pentaho time, we've marched to the beat of "make it possible first, then make it pretty/easy", and it's this scenario where our approach pays big dividends to our customers/community/users. 
Because the platform/server/tools were built for extensibility, there are numerous places where you can roll up your sleeves and leverage a simple API to implement a customization that suits your specific requirements.<br /><br />The Pentaho Architect's Bootcamp is geared toward developers, partners, customers, and consultants who are ready to roll up their sleeves and understand the complex problems that are surfacing in large scale BI implementations, and how to extend the Pentaho suite of products to support far more questions and customizations than the out-of-the-box product allows.<br /><br />Here's a sampling of some of the questions that are answered during Architect Bootcamp training:<br /><br /><ul><li>How do I integrate my own custom visualizations (maps, charts, gauges, etc) into Pentaho?</li><li>How do I accommodate multiple companies/groups/organizations' data in my solutions, while maintaining each company's/group's/organization's own point of view of the data?<br /></li><li>How do I dynamically drive row level security in Pentaho Analysis and Pentaho Metadata?<br /></li><li>How do I integrate Pentaho solution content into my own application?<br /></li><li>How do I customize security across the Pentaho platform and pillars?<br /></li><li>How do I create integrated solutions using ETL, reporting, analysis and metadata to deliver my customer's specific solutions? </li><li>How do I plug my custom content into the Pentaho BI Server? How do I then integrate my custom data/functionality with the Pentaho pillars (ETL, reporting, analysis, metadata, etc)?<br /></li></ul>I really believe this course is a game-changer for Pentaho users and solution developers. I teach the course, so don't just take my word for it. Here's some feedback we received from the first course offering:<br /><br />"We traveled from India to Florida for this course, and we are extremely glad that we did. 
The information is very valuable."<br /><br />"The presentation was excellent, [I e]specially liked the interactive nature of the class."<br /><br />"All of the lessons uncovered new boundaries; all topics had more priority!"<br /><br />"YEEEEAAAA.. just built my first Pentaho BI Server plugin... Pentaho Architect's Bootcamp RULES!!! #Pentaho @Pentaho"<br /><br />"Day 4 Pentaho Architect's Bootcamp brought so many excitements, I loved it. even though my brain feels like it's going to explode!! #Pentaho"Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com2tag:blogger.com,1999:blog-31405041.post-39090056983907124332009-09-20T11:39:00.001-07:002009-09-20T11:43:24.923-07:00Barcelona Pentaho Community Meetup 2009 PicsWe fly home from Barcelona tomorrow, a lovely vacation and another fantastic Pentaho community gathering. <br /><br />I'll be chatting with many of you online in the near future, and hopefully will see everyone again next year - Vienna is it? Sweet!<br /><br />As promised, <a href="http://www.flickr.com/photos/37034053@N07/sets/72157622418310534/show/">here's my pics</a> :) <br /><br />kindest regards,<br />GGretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com5tag:blogger.com,1999:blog-31405041.post-85414814270213610602009-09-19T05:17:00.000-07:002009-09-19T05:30:23.473-07:00Hola from Barcelona, Community Gathering 2009We're close to restarting the sessions for the afternoon, just dropping in to update my fellow Pentaho colleagues on the gathering:) <br /><br />We got a bit of a late start this morning, mostly because the community started the meetup last night, and the socializing lasted into the wee hours for the group. A great time was had by all :) The morning's speakers had great content, covering a variety of topics, from Mozilla statistics presentation with CDF to the latest revision of PAT, the community analysis tool. 
Roland and Jos are here, our celebrated Pentaho Solutions authors, signing books and presenting the basics of developing custom CDF components.<br /><br />And in traditional European holiday style, we're late in getting the afternoon sessions started; most of our group (around 40 attendees with community and Pentaho included) is still in earnest roadmap discussions at the cervesseria :)<br /><br />Next post, some pics for posterity. <br /><br />buenas tardes!<br />GretchenGretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com1tag:blogger.com,1999:blog-31405041.post-85701392248389894262009-09-14T17:10:00.001-07:002009-09-14T17:21:28.479-07:00Pentaho Community, Together in BarcelonaDoug and I arrived in Barcelona this morning, early enough to see some of this beautiful city before the <a href="http://wiki.pentaho.com/display/COM/Pentaho%20Community%20Gathering%20-%20Barcelona%202009">Pentaho Community Meetup </a>this weekend. <br /><br />This is the second annual community meetup, an event that is organized and planned completely by Pentaho community for Pentaho community. No fluffy corporate speak, just a full weekend of Pentaho community developers and users showing off their stuff, talking through their current projects and solutions, and having a few beers and some fun. Many thanks to Tom Barber for planning and sponsoring much of this year's event. <br /><br />We look forward to seeing familiar faces, and new community as well :0) See you all very soon!
A huge <span style="font-weight: bold;">congratulations and thank you</span> to Roland Bouman and Jos van Dongen, two long-time Pentaho community members who wrote the book.<br /><br />I can't tell you how excited I am to see this book! For many years, developers and project managers that I've worked with have felt that a book like this one is the missing link to helping customers achieve success with their warehouse and business intelligence strategies. Most books on business intelligence are either too abstract or offer guidance only on select pillars (for example, only reporting solutions), which leave the reader with unfulfilled requirements and no direction for filling in the gaps.<br /><br />With <span style="font-style: italic;">Pentaho Solutions</span>, the reader gets a concrete explanation and best-of-breed Pentaho implementation of ETL, reporting, analysis, dashboarding and data mining solutions; 5 core pillars and their concepts that contribute to a healthy, whole, successful BI strategy and implementation.<br /><br />You can <a href="http://www.amazon.com/Pentaho-Solutions-Business-Intelligence-Warehousing/dp/0470484322/ref=sr_1_1?ie=UTF8&s=books&qid=1250868878&sr=8-1">pre-order your copy at Amazon.com</a> :)<br /><br />Roland and Jos, the team has already sunk their teeth in, and they love what they're reading. 
Well, the picture says it all :)<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3QB7CG59xn-vHLRET_nILb2NzLBxRWxnzf5cfHW-9VOnolFgtsYP2zWPoIjpvFYHUocNHitVfBJVfhtWTNl-vesRoqvzmqPzKeeTW9kYPNE55eR-yO-Kv48XsP1K0r_aTEkEC6Q/s1600-h/closeup.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 217px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi3QB7CG59xn-vHLRET_nILb2NzLBxRWxnzf5cfHW-9VOnolFgtsYP2zWPoIjpvFYHUocNHitVfBJVfhtWTNl-vesRoqvzmqPzKeeTW9kYPNE55eR-yO-Kv48XsP1K0r_aTEkEC6Q/s400/closeup.jpg" alt="" id="BLOGGER_PHOTO_ID_5372438126187996642" border="0" /></a>Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com7tag:blogger.com,1999:blog-31405041.post-59647628846441447342009-08-12T06:43:00.000-07:002009-08-12T07:32:53.275-07:00Development and Debugging with GWT and JavascriptJava code is the bread and butter of what I do, but as most Java developers know, there is a plethora of good frameworks and technologies that surround Java and provide a means to build very powerful and extendable software.<br /><br />Lately, I've popped back into GWT land and have been dealing with lots of Javascript, both original and generated. I originally introduced myself to GWT building a small volunteer information submittal form for <a href="http://www.brevardrescuemission.org/">Brevard Rescue Mission</a>. This tiny application only scratched the surface of what GWT could do in its earliest stages. The magic that the Pentaho development team have performed with Pentaho Dashboarding is a new level of web-goodness, fully capitalizing on the power of GWT. I've been dabbling in the chart rendering layers of dashboards, and have learned some simple, effective means of making life a bit easier when dealing with debugging and developing Javascript and GWT generated Javascript. 
I hope you find these tips useful, and it's certainly nice to have them aggregated in one place! I have to give credit for this info to Nick Baker and Mike D'Amour, two of my colleagues at Pentaho. Use this blog post as a starting point for googling the original sources for more details on each tip :)<br /><h4>Helpful tips for Developing and Debugging with GWT & Javascript in General</h4> <p style="font-style: italic;"><b>Limit the user.agent Property</b></p> <p>GWT compiling is resource intensive due to the number of compilations that happen for the browsers supported. At times, you will run out of heap space or other resources before the compile can finish (this usually manifests itself as a StackOverflowError). </p><p>The following entries in your *.gwt.xml file can help by only compiling for the single browser you may be testing on:</p> <div class="code panel" style="border-width: 1px;"><div class="codeContent panelContent"><pre class="code-xml"><inherits name="com.google.gwt.user.UserAgent"/> <br /> <set-property name="user.agent" value="ie6" /><br /></pre> </div></div><p>Valid values for the <b>user.agent</b> property are: <b>ie6,gecko,gecko1_8,safari,opera</b></p> <p style="font-style: italic;"><b>Limit the gwt.compile.localWorkers</b></p> <p>You can also scale back the number of threads to use for running parallel compilation. While this may hurt performance, you will be able to finish the compilation without running out of resources. This property, <b>gwt.compile.localWorkers</b>, can be added to the compile option in your ant script. </p> <p style="font-style: italic;"><b>Bump the GWT version from 1.6.4 to 1.7.0</b></p> <p>GWT 1.7.0 seems to have resolved many of the compilation resource issues with GWT. </p> <p style="font-style: italic;"><b>GWT Pretty Print Compile</b></p> <p>By default, we obfuscate our GWT compiled Javascript. To debug readable GWT compiled Javascript, compile with pretty print turned on. 
This property, <b>gwt-style</b>, can be added to the compile option in your ant script. Valid values include OBF, PRETTY, and DETAILED.</p> <h3><a name="DevelopmentandDebuggingwithGWTandJavascript-DebuggingJavascript"></a></h3> <p style="font-style: italic;"><b>IE Javascript Debugging Help</b> </p> <p>If you need to debug Javascript in IE, it is highly recommended that you get IE8. You can install IE8 for the duration of your testing, then uninstall it when you no longer need it, as it has conflicts with GWT. IE8 has a new set of features called <a href="http://msdn.microsoft.com/en-us/library/dd565628%28VS.85%29.aspx" rel="nofollow">Developer Tools</a> that make debugging Javascript very easy. </p> <p style="font-style: italic;"><b>Helpful In-Line Javascript Alerts</b></p> <p>You can use the following line of code to send alert windows wherever you like in your Javascript code: </p> <div class="code panel" style="border-width: 1px;"><div class="codeContent panelContent"> <pre class="code-java">$wnd.alert(<span class="code-quote">"Hello World"</span>)</pre> </div></div> <p>A good example from Nick: </p> <p>This is literally saving me hours. By adding a line to the end of the<br />printStackTrace() function you can alert out the stacktraces that normally do nothing when compiled.</p> <p>Open up the gwt script file (xxxxxxxxxxxxxxxxxxxxxxxxx.cache.html) for your particular browser. 
I find it by seeing what's loaded in Firebug.</p> <p>Search for "function $printStackTrace"</p> <p>Add a new line right before the function returns:</p> <div class="code panel" style="border-width: 1px;"><div class="codeContent panelContent"> <pre class="code-java">$wnd.alert(msg.impl.string);</pre> </div></div> <p>It should now look like this.</p> <div class="code panel" style="border-width: 1px;"><div class="codeContent panelContent"> <pre class="code-java">function $printStackTrace(<span class="code-keyword">this</span>$<span class="code-keyword">static</span>){<br /><span class="code-keyword">var</span> causeMessage, currentCause, msg;<br />msg = $<span class="code-object">StringBuffer</span>(<span class="code-keyword">new</span> <span class="code-object">StringBuffer</span>());<br />currentCause = <span class="code-keyword">this</span>$<span class="code-keyword">static</span>;<br /><span class="code-keyword">while</span> (currentCause) {<br />causeMessage = currentCause.getMessage();<br /><span class="code-keyword">if</span> (currentCause != <span class="code-keyword">this</span>$<span class="code-keyword">static</span>) {<br />msg.impl.string += 'Caused by: ';<br />}<br />$append_4(msg, currentCause.getClass$().typeName);<br />msg.impl.string += ': ';<br />msg.impl.string += causeMessage == <span class="code-keyword">null</span>?'(No exception detail)':causeMessage;<br />msg.impl.string += '\n';<br />currentCause = currentCause.cause;<br />}<br />$wnd.alert(msg.impl.string);<br />}</pre> </div></div> <p style="font-style: italic;"><b>In-Code Breakpoints</b></p> <p>Rather than sifting through the script debugger window trying to figure out where to put a breakpoint, you can use the following line of code to embed a breakpoint:</p> <div class="code panel" style="border-width: 1px;"><div class="codeContent panelContent"> <pre class="code-java">debugger;</pre> </div></div><br />Feel free to comment and send your favorite tricks for working with GWT and 
Javascript.Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com0tag:blogger.com,1999:blog-31405041.post-69674067677795403862009-05-19T12:33:00.000-07:002009-05-20T15:08:34.907-07:00Pentaho Analysis Tool Integrated as a Pentaho PluginI had the chance this week to play around with the still-under-construction Pentaho plugin architecture in the Citrus code line. The new architecture is just what BI developers have been waiting for: totally flexible with several new ways to integrate with the server, simple to use, and it allows for building nicely decoupled extensions.<br /><br />With Aaron Phillip's help, I got my head around the new features in less than a day, and had my first plugin written shortly after: The <a href="http://code.google.com/p/pentahoanalysistool/">Pentaho Analysis Tool</a> (PAT) plugin. Before I get into the details of the PAT plugin, let's first talk about the new tools and capabilities in the Pentaho BI server's plugin layer.<br /><br />The plugin architecture consists of several different fun ways you can hook into the Pentaho BI Server, without having to modify server code or disturb the platform deployment. All avenues for leveraging the plugin architecture expect that the necessary files and code will be found in the solutions folders. 
The layer currently has the following capabilities:<br /><br /><ol><li>Customization of the menu system of the "classic" and more recent PUC (Pentaho User Console) user interfaces</li><li>Customization of various page contents (overlays)</li><li>New types of content to be added to the solution repository and operated upon in the user console</li><li>New Java classes that generate UI pages to be dynamically added to the server</li><li>(new in 3.0) Add your own BI Component to the platform without having to modify system files and paths</li></ol>You can get more details about these features and how they work by reading the documentation <a href="http://wiki.pentaho.com/display/ServerDoc2x/BI+Platform+Plugins+in+V2">here</a>. Aaron also created a <a href="http://wiki.pentaho.com/display/ServerDoc2x/Echo+Plugin+-+a+sample+plugin+for+the+BI+Platform">sample plugin</a> that demonstrates each of these features in a simple plugin mockup that is also a great template for new plugin creation.<br /><br />So that's exactly what I did. Here's a screenshot of the results of my plugin:<br /><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRfrj3kVaieukQi_2k59G7C3jwisLR43E1I6CfT8UapZvHAKwsQmKP8A-3YdeFaczr7I_WJjnQE8qmYbzJ_5dkH5HY9FvWIrTMhY0UIV6gGfLlovX7iAXHdn3jYAitk07Ostpw0Q/s1600-h/pat-plugin-demo.PNG"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 325px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRfrj3kVaieukQi_2k59G7C3jwisLR43E1I6CfT8UapZvHAKwsQmKP8A-3YdeFaczr7I_WJjnQE8qmYbzJ_5dkH5HY9FvWIrTMhY0UIV6gGfLlovX7iAXHdn3jYAitk07Ostpw0Q/s400/pat-plugin-demo.PNG" alt="" id="BLOGGER_PHOTO_ID_5337906855383613762" border="0" /></a><br />Using the EchoPlugin sample as a guide, I created a new content type (.xpav, for Pentaho Analysis View), which is the first notion of a view definition file for PAT. 
When you "open" this new content type in PUC, it initializes and launches PAT, which is a separately deployed web application. This is accomplished by creating a new content generator in the plugin that delegates the generation to the PAT webapp. It takes a bit to put it all together: you need a bleeding edge Citrus BI Server download, the latest PAT code and the plugin project. If you are interested in seeing it in action, read the <a href="http://code.google.com/p/pentahoanalysistool/wiki/ServerIntegrationPluginForPAT">integration instructions here</a>.<br /><br />I only took advantage of a couple of the new plugin layer's capabilities in my first plugin. I'm looking forward to playing with the new web services as well as the component that allows my plain old Javabean to look like a BI component automagically. I can foresee great extensions coming fast for the Pentaho BI Server with this new architecture!<br /><br />I've listed some good references for those who are ready to take a look at plugins:<br /><br />Here is the documentation:<br /><a href="http://wiki.pentaho.com/display/ServerDoc2x/BI+Platform+Plugins+in+V2">http://wiki.pentaho.com/display/ServerDoc2x/BI+Platform+Plugins+in+V2</a><br /><br />and here is the Plugin Depot, where you can show others the cool new extensions you've built:<br /><a href="http://wiki.pentaho.com/display/ServerDoc2x/Plugin+Depot">http://wiki.pentaho.com/display/ServerDoc2x/Plugin+Depot</a><br /><br />and if you have questions, comments or problems, or think you may have spotted a bug, chat with some of the developers about it here:<br /><a href="http://forums.pentaho.org/forumdisplay.php?f=73">http://forums.pentaho.org/forumdisplay.php?f=73</a><br /><br />kindest regards,<br />GretchenGretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com2tag:blogger.com,1999:blog-31405041.post-68082756477674255562009-05-10T02:02:00.000-07:002009-05-20T15:09:16.123-07:00Maven: The Definitive GuideMy one true nerdy 
tendency: I like writing technical documentation. It's ironic then (or a bit hypocritical) that I loathe reading it. Chalk it up to my lack of passion for technology. I am a passionate problem solver; technology is a sometimes rewarding, sometimes frustrating means to help effectively get the problem solving job done.<br /><br />Recently, I began reading <a href="http://www.sonatype.com/products/maven/documentation/book-defguide">Maven: The Definitive Guide</a> while getting a pedicure at my local salon (laugh it up guys, I can guess where most of you do your leisure reading). I strongly recommend that any developer approaching Maven for the first (or tenth) time give chapters 3 through 8 a read. This guide is what you hope most technical guides and books would be; usually, quite early on, they disappoint.<br /><br />The guide starts with a quick, understandable introduction to Maven terminology and concepts, via a short step-by-step example. As I was reading this from a "make this worth my while" perspective, I had specific use case questions that immediately popped into my head ... and then, I was pleasantly surprised to find the answers in the next few paragraphs.<br /><br />For example, the guide mentions early on that "support for transitive dependencies is one of Maven's most powerful features". To that my questions were "What about conflicts in dependency hierarchies?" and "What about compile-time dependencies that I don't want to package?". The rest of the chapter addresses exactly those questions with explanations on dependency exclusions and scoping. Finally. A book that thinks like I do :)<br /><br />A quick summary of the rest of the meat of the guide: chapters 4 through 8 build on the core concepts introduced in 3, with bite-size chunks of additional functional explanation in each chapter. The material is presented as a hands-on example, building in feature complexity little by little. 
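As an aside, the two answers the guide gives to my dependency questions above boil down to a few lines of POM. This fragment is illustrative only (it is not taken from the guide, and the artifact names are just common real-world examples):

```xml
<!-- Illustrative POM fragment: scoping and exclusions -->
<dependencies>
  <!-- "provided" scope: needed at compile time, but NOT packaged,
       because the servlet container supplies it at runtime -->
  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>servlet-api</artifactId>
    <version>2.4</version>
    <scope>provided</scope>
  </dependency>
  <!-- exclusion: prune a conflicting transitive dependency
       out of this library's dependency tree -->
  <dependency>
    <groupId>org.example</groupId>
    <artifactId>some-library</artifactId>
    <version>1.0</version>
    <exclusions>
      <exclusion>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>
```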
Chapter 4 shows you how to add new dependencies to your project; 5 introduces simple web application features; chapters 6 and 7 cover multi-module project and enterprise project features. This presentation worked for me on a few different levels:<br /><br /><ol><li>The graduated approach to the introduction of new materials made a large amount of Maven terminology, concepts and finally usage documentation digestible.</li><li>The authors take great care in describing WHY they arrange and refactor the projects as they do, in a very modular fashion. This approach, in practice, lends itself not only to Maven's default conventions, but also to best practices for software project layout. Note that this introduced complexity to the examples that wasn't necessary to explain the features at hand. But the authors bit that bullet in order to present a good and useful way of developing a project.<br /></li><li>The example projects described in the guide are immediately relevant for me. I write Java code. I use Spring. I use Hibernate. This, of course, will not be the case for every reader, but it was a nice bonus for me.<br /></li></ol> So after all of my monologue thoughts, I leave you with a few tidbits:<br /><br /><ul><li>Read the guide (at least Part 1), then decide how problematic you perceive Maven to be. I know my perception changed dramatically.</li><li>Note that it is a bit outdated, with deprecated goals and such (the guide is updated for Maven 2.0.10; I downloaded Maven 2.1). This really didn't distract me at all.</li><li>The chapters I reference are only the tip of the iceberg. Part 2 of the guide includes another 200 reference pages that I have yet to use. 
I'll let you know how that goes for me:)<br /></li></ul>Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com1tag:blogger.com,1999:blog-31405041.post-40903390386159768562009-04-06T19:18:00.000-07:002009-04-06T20:10:53.523-07:00Every Developer Needs a RoadshowIt's been a few days since returning from the Pentaho Partner Summit. When I get the chance to attend company events, conferences or seminars (the opportunities are rare), I try to sum up for myself the benefits of having traveled, gathered and given my attention to the occasion at hand.<br /><br />In the case of the Partner Summit, I thought of several key revelations that came about as a result of the trip. The one that stands at the forefront of my mind: <span style="font-weight: bold;">every developer needs a roadshow</span>.<br /><br />Not as a roadie in a product tour, or as booth Bob at a trade show, but as an interested attendee at an event that showcases whatever you have been working on as a developer. Mind you, this is not a NEW revelation for me; I've had the privilege of representing Hyperion Analyzer at Java One as a developer on that project, and talked to many talented Oracle folks about Pentaho at ODTUG, as well as many other roadshows of my own. I have always come back saying the same thing to my peers - "You guys have to hear what they are saying! You have to feel the excitement!". (Yes, there was a <span style="font-weight: bold;">maddening</span> amount of energy and excitement around the Pentaho Partner Summit!)<br /><br />The benefits of sending developers out to events that have nothing to do with development and everything to do with the project or product are many. The first benefit that I got excited about in Menlo Park was that I was able to hear how our partners and customers were using Pentaho. I'm committed to focusing on what questions BI users are asking as I re-enter the BI space as a developer, and this was a prime audience. 
During networking opportunities, partners told stories about customers with big data on Vertica, MySQL, and InfoBright; in intranets, in DMZs, and of course, now in the Cloud. Pentaho partners OpenBI had an attentive and boisterous audience as they discussed <a href="http://bi.cbronline.com/news/nutricia_north_america_deploys_pentaho_bi_suite_on_the_amazon_ec2_130209">their Cloud implementation with client Nutricia</a>.<br /><br />I also really enjoyed having face time with the consumers of the fruits of my previous efforts. I have been away for some time, but I think some parts of the Pentaho projects are still riddled with my signature:) It's OK that many, but not all, comments were glowing; that's the point, right? I feel like I understand just a little bit better some of our users' pain points. And that puts me in a better place to alleviate some of that pain. (No worries, Brian and Nick and Domingo ... <span style="font-weight: bold;">Will</span> <span style="font-weight: bold;">and Thomas </span>will get right on native crosstabs!!!)<br /><br />The Partner Summit event gave me the opportunity to lift my head up from the details of our projects and see the field from our partners' perspective. Can I get a lot of the same information surfing the web or hitting the forums? Sure. That perspective, though, is unique to spending time with the people who are providing business intelligence solutions in the market. That, I believe, only comes on the road.Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com1tag:blogger.com,1999:blog-31405041.post-29694729133380176102009-04-03T07:28:00.000-07:002009-05-20T15:08:34.907-07:00Pentaho Partner Summit: Menlo Park, CAIt's a fortunate coincidence that I'm in California at the same time the Pentaho Partner Summit is going on. The event is packed, with partners and interested parties attending from more than 15 countries. 
The speakers yesterday were top quality, talking about everything from business intelligence in the Cloud to commercial open source business strategy. <br /><br />So I've been able to spend lots of quality time connecting with old friends and colleagues, and have met some new, really talented folks. More details on the event later, but for now, check out some pics of the event <a href="http://www.flickr.com/photos/37034053@N07/sets/72157616302234370/show/">here</a>.<br /><br />Cheers, -GGretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com0tag:blogger.com,1999:blog-31405041.post-29922133162120472402009-03-26T23:38:00.001-07:002009-05-20T15:09:16.123-07:00Because I've Got Issues ...I have to hand it to the guys over at <a href="http://www.atlassian.com/">Atlassian</a>, JIRA is a pretty killer app (although I know now that <a href="http://www.amazon.com/Love-Killer-App-Business-Influence/dp/060960922X">Love is the REAL Killer App</a>:) ).<br /><br />I've worked with JIRA for, well, a really long time. I've always worked in companies where you needed to wear many hats, and I'm one of those developers that doesn't get snobby when I'm asked to step outside of my comfy Java home and help out the IT folks. So it's usually me that gets those prize-winning projects like migrating forums, internationalizing wikis, or looking for new software to streamline our internal processes. I've spun JIRA around the dance floor several times, XSLT'ing crazy aggregate reports from XML backup formats, writing plugins to support externalizing JIRA data, customizing schemes, changing workflows. Every time, the same epiphany gets me - <span style="font-weight: bold;">JIRA just works, exactly how you would think it should.</span><br /><br />Some who are not so in the know might think, "Gretchen, you simpleton, it's a series of instructions to a processor, of course that's how it works". 
But those of us who bend software over and around daily know that few apps are actually written with quality, exceptional exception handling and in an intuitive manner that doesn't require years of higher learning and great tolerance for pain to adopt. (This is a very familiar concept particularly for those who use a certain unreasonable operating system).<br /><br />This time, we need to move <a href="http://mondrian.pentaho.org/">Mondrian's</a> tracker issues from their original home on Sourceforge over to JIRA, which is our tool of choice for managing work and issues at Pentaho. With an assist from my other favorite killer app, <a href="http://kettle.pentaho.org/">Kettle</a>, it has been a dreamy couple of days putting together the pieces to get Mondrian's issues to their new home. OK, maybe not dreamy, but certainly pain free.<br /><br />Kudos, my Atlassian friends. You Aussies got it going on.Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com2tag:blogger.com,1999:blog-31405041.post-12519793085564727142009-03-20T17:10:00.000-07:002009-05-20T15:10:43.382-07:00Where have you been??When I say "you", of course I mean me! I silently fell off the radar about 7-8 months ago, and am now finally re-emerging. Well, let me tell you what I've been doing in my "off" time:<br /><br /><ul><li>First, let me introduce you to Jack David, my new baby boy. Doug and I were blessed with this little guy August 15th, 2008. He is the primary cause for my hiatus. I have been loving every minute of being home with my peanuts (Anthony, 13, Bella, 3 and baby Jack)! 
Alas, the lure of OLAP cubes, ETL and bug-squashing safaris was just too compelling to resist, and it's time to return to the business world.</li></ul><ul><li> <a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjghqD-HWZItqjFGRKGp1FgILlRDw8mnXNGD82OOa8RgaA8Wl6LYEXXGItxH-iciRwk5vdss30kAygtX6RNqkLsbivVBnxZBnrteR72wmmZQOi4AjnRA9z_-4lTww9VaJmAycTLVQ/s1600-h/jack_infant.jpg"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 213px; height: 213px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjghqD-HWZItqjFGRKGp1FgILlRDw8mnXNGD82OOa8RgaA8Wl6LYEXXGItxH-iciRwk5vdss30kAygtX6RNqkLsbivVBnxZBnrteR72wmmZQOi4AjnRA9z_-4lTww9VaJmAycTLVQ/s320/jack_infant.jpg" alt="" id="BLOGGER_PHOTO_ID_5315429874770967266" border="0" /></a></li><li>I also joined the board of directors as secretary of the <a href="http://www.brevardrescuemission.org">Brevard Rescue Mission</a>, a faith-based ministry that provides whole-life transition resources for near-homeless moms and their kids. My dear friend Stacia Glavas is the founder, and I have been privileged to be able to handle her communications, marketing and administrative needs in between diaper changes.<br /></li><li> I have been dabbling in a bit of graphic design and found that while I have no natural talent, the Adobe suite allows me to appear semi-talented in creating fun and compelling designs. I have since designed the web site for the rescue mission mentioned above, as well as the logo for my daughter's new preschool, several business cards and stationery for friends, and my latest, most daring adventure: skinning a MySpace page for my photographer and friend, <a href="http://www.gioiaphotography.com">Yvette Gioia</a>! 
Where WILL my curiosities take me?????</li><li>I have also managed to talk a friend of mine into letting me "borrow" his home renovation crew to renovate the entire exterior of our 25-year-old home. That's right, I've decided that I also have some sort of qualifications as a contractor. Or possibly just a penchant for frustration and pain, we'll soon see! </li></ul>As you can see, when Doug and I decided that it would be a good idea for me to "take some time off" to adjust and organize our growing family, well, I may have misread the "time off" instructions :) I have thoroughly enjoyed my very full, engaged foray into being a stay-at-home mom. I actually just this month officially earned my soccer mom title, as Bella joined the local soccer team. I never knew there was a position for grass-pickers in soccer, but sure enough, my daughter has the exalted title!<br /><br />While I will miss much of the freedom that comes from being my own manager, I miss my career more. So expect to hear from me soon, as I mull over my next career adventure! And of course, immerse myself back into the Pentaho Nation!<br /><br />Kindest regards!<br />GretchieGretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com8tag:blogger.com,1999:blog-31405041.post-3580110456247501452008-06-02T14:00:00.000-07:002009-05-20T15:08:34.907-07:00Pentaho Meetup in Mainz, GermanyPentaho is hosting its first community meetup in Mainz, Germany in a few short weeks. I like the format we've chosen, as it combines informal sessions with lots of food, drink and some touring. And everyone is encouraged to bring their ideas and latest projects for a show and tell.<br /><br />Find out details and register <a href="http://pentaho2008mainz.eventbrite.com/">here</a>. It's sure to be informative and provide an opportunity to make some great Pentaho contacts.Gretchen Moranhttp://www.blogger.com/profile/14898841044841630941noreply@blogger.com1