Sunday, January 24, 2010

Tutorial - Integrating Terracotta EHCache for Hibernate with Spring PetClinic

Now updated for Terracotta 3.3!

With Terracotta's latest release, configuring a 2nd level cache for Hibernate is incredibly simple. What's more, using the included Hibernate console we can identify the hot spots in our application and eliminate unwanted database activity.

In this blog I will show you how to set up and install Terracotta EHCache for Hibernate in the venerable Spring PetClinic application. After that, we will identify where database reads can be converted to cache reads. By the end of the tutorial we will have converted all database reads into cache reads, demonstrating 100% offload of the database.

What you'll need:

  • Java 1.5 or greater
  • Ant
  • Tomcat
Let's get started.

Step 1 - Download and unzip the Spring PetClinic Application

Go to the Spring download site and download Spring 2.5.6 SEC01 with dependencies. Of course you can download other versions, but 2.5.6 SEC01 is the version I used for this tutorial.

Unzip the installation into $SPRING_HOME.

Step 2 - Build PetClinic for use with Hibernate

The PetClinic application is located in $SPRING_HOME/samples/petclinic. cd into this directory.

The first thing we need to do is set up PetClinic to use Hibernate. To do so, update the web.xml file located in src/war/WEB-INF/web.xml. Locate the contextConfigLocation section and update it to look like the following (comment out the JDBC config file and un-comment the Hibernate config file):
<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>/WEB-INF/applicationContext-hibernate.xml</param-value>
  <!-- <param-value>/WEB-INF/applicationContext-jdbc.xml</param-value>
       <param-value>/WEB-INF/applicationContext-jpa.xml</param-value> -->
  <!-- To use the JPA variant above, you will need to enable Spring load-time
       weaving in your server environment. See PetClinic's readme and/or
       Spring's JPA documentation for information on how to do this. -->
</context-param>
Now, build the application:
$ ant warfile
...
BUILD SUCCESSFUL
Total time: 20 seconds

Step 3 - Start PetClinic

First, start the HSQLDB database:
$ cd db/hsqldb
$ ./server.sh
Next, copy the WAR file to your Tomcat's webapps directory:
$ cp dist/petclinic.war $TOMCAT_HOME/webapps
And start Tomcat:
$ $TOMCAT_HOME/bin/catalina.sh start
...

You should now be able to access PetClinic at http://localhost:8080/petclinic and see the home screen:


Step 4 - Install Terracotta

Download Terracotta from http://www.terracotta.org/download. Unzip and install it into $TC_HOME.

Step 5 - Configure Spring PetClinic Application to use Ehcache as a Hibernate Second Level Cache

First, the Terracotta Ehcache libraries must be copied to the WEB-INF/lib directory so they are packaged into the PetClinic WAR file. The easiest way to do this is to update the build.xml file that comes with Spring PetClinic. Add two properties to the top of the build file (make sure to replace PATH_TO_YOUR_TERRACOTTA with your actual path):
<property name="tc.home" value="PATH_TO_YOUR_TERRACOTTA" />
<property name="ehcache.lib" value="${tc.home}/ehcache/lib" />
Then update the lib files section: locate the section that starts with the comment "copy Tomcat META-INF" and add the Terracotta and Ehcache jars like so:
<!-- copy Tomcat META-INF -->
<copy todir="${weblib.dir}" preservelastmodified="true">
  <fileset dir="${tc.home}/lib">
    <include name="terracotta-toolkit*.jar"/>
  </fileset>
  <fileset dir="${ehcache.lib}">
    <include name="ehcache*.jar"/>
  </fileset>
  ...
</copy>
The Spring PetClinic application includes Ehcache libraries by default, but we are setting up the application to use the latest version of Ehcache. Therefore you should also remove the sections in this file that copy the bundled ehcache libraries into the WAR file:
<!--
<fileset dir="${spring.root}/lib/ehcache">
<include name="ehcache*.jar"/>
</fileset>
-->
Note: there are two such entries; make sure to update them both!

You'll also need to update the Spring configuration file located at war/WEB-INF/applicationContext-hibernate.xml, which contains the properties that configure Hibernate. We need to update the 2nd level cache settings so Hibernate will use Terracotta Ehcache. Update the Hibernate SessionFactory bean like so:
<!-- Hibernate SessionFactory -->
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"
    p:dataSource-ref="dataSource" p:mappingResources="petclinic.hbm.xml">
  <property name="hibernateProperties">
    <props>
      <prop key="hibernate.dialect">${hibernate.dialect}</prop>
      <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
      <prop key="hibernate.generate_statistics">${hibernate.generate_statistics}</prop>
      <prop key="hibernate.cache.use_second_level_cache">true</prop>
      <prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</prop>
    </props>
  </property>
  <property name="eventListeners">
    <map>
      <entry key="merge">
        <bean class="org.springframework.orm.hibernate3.support.IdTransferringMergeEventListener"/>
      </entry>
    </map>
  </property>
</bean>
As written, the Spring PetClinic entities are not configured for caching; Hibernate requires caching to be explicitly enabled for each entity before it takes effect. To enable caching, open src/petclinic.hbm.xml and add a cache entry to each entity definition. Here are the entity definitions I used:
<class name="org.springframework.samples.petclinic.Vet" table="vets">
<cache usage="read-write"/>
....
</class>
<class name="org.springframework.samples.petclinic.Specialty" table="specialties">
<cache usage="read-only"/>
....
</class>
<class name="org.springframework.samples.petclinic.Owner" table="owners">
<cache usage="read-write"/>
....
</class>
<class name="org.springframework.samples.petclinic.Pet" table="pets">
<cache usage="read-write"/>
....
</class>
<class name="org.springframework.samples.petclinic.PetType" table="types">
<cache usage="read-only"/>
....
</class>
<class name="org.springframework.samples.petclinic.Visit" table="visits">
<cache usage="read-write"/>
....
</class>
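As an aside, you can also verify the caching behavior programmatically, in addition to using the Developer Console later in this tutorial. Hibernate's statistics API exposes second-level cache counters. The helper below is a minimal sketch of my own (it is not part of PetClinic); it assumes you can obtain the sessionFactory bean defined above and that the ${hibernate.generate_statistics} placeholder resolves to true:

import org.hibernate.SessionFactory;
import org.hibernate.stat.Statistics;

public class CacheStatsLogger {

    // Logs second-level cache counters for the whole SessionFactory.
    // Requires hibernate.generate_statistics to be enabled.
    public static void logSecondLevelCacheStats(SessionFactory sessionFactory) {
        Statistics stats = sessionFactory.getStatistics();
        System.out.println("2nd level cache hits:   " + stats.getSecondLevelCacheHitCount());
        System.out.println("2nd level cache misses: " + stats.getSecondLevelCacheMissCount());
        System.out.println("2nd level cache puts:   " + stats.getSecondLevelCachePutCount());
    }
}

A hit count that climbs while the miss count stays flat tells you that entity reads are being served from the cache rather than the database.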

Step 6 - Rebuild and re-deploy

Now that the PetClinic app is configured for use with Terracotta and the Hibernate 2nd level cache, re-build the WAR file and re-deploy it to your Tomcat installation:
$ ant warfile
...
BUILD SUCCESSFUL
Total time: 20 seconds
$ $TOMCAT_HOME/bin/catalina.sh stop
$ rm -rf $TOMCAT_HOME/webapps/petclinic
$ cp dist/petclinic.war $TOMCAT_HOME/webapps

Note: Do not start Tomcat yet!

Step 7 - Start Terracotta

During the integration process you will want to have the Terracotta Developer Console running at all times. It will help you diagnose problems, and provide detailed statistics about Hibernate usage. Start it now:
$ $TC_HOME/bin/dev-console.sh




If the "Connect Automatically" box is not checked, check it now.

Now you will need to start the Terracotta server:
$ $TC_HOME/bin/start-tc-server.sh
Your Developer Console should connect to the server and you should now see:



Step 8 - Start Tomcat and PetClinic with Caching

$ $TOMCAT_HOME/bin/catalina.sh start


Now access the PetClinic app again at http://localhost:8080/petclinic. Take a look at the Developer Console. It should indicate one client has connected - this is the PetClinic app. Your Developer Console should look like this:



If it does not, please review Steps 1-8.

It's now time to review our application. Click the "Hibernate" entry in the Developer Console. You should now see the PetClinic entities listed. As you access entities in the application, the statistics in the Developer Console will reflect that access. Select "Refresh" to see an up to date view of the statistics.


Step 9 - Analyze Cache Performance

Now we can use the Developer Console to monitor the cache performance.

Select the "Second-Level Cache" button from the upper right hand corner. To monitor performance in real-time you can use either the Overview tab or the Statistics tab.

In the Spring PetClinic app, select "Find owner" from the main menu. You should see the following screen:



Now find all owners by pressing the "FIND OWNERS" button. Without any parameters in the query box, this will search for and display all owners. The results should look like this:



If you are using the Overview tab, you will see the cache behavior in real-time. Refresh the find owners page (re-send the form if your browser asks) and then quickly switch to the Developer Console. You should see something like the following:



Try switching to the Statistics tab and do the same thing. Notice that the Statistics tab gives you a history of the recent activity. After finding the owners again, your screen should look like the following:



Notice the graph labeled "DB SQL Execution Rate". It shows exactly how many SQL statements Hibernate is sending to the database. This feature is unique to Terracotta: special instrumentation added to Hibernate detects the SQL statements being sent to the database. Let's use this feature to eliminate all database reads.

Step 10 - Eliminate all database activity

By using the Developer Console we can see that we've already eliminated almost all of the database activity we reasonably can. Of course we cannot eliminate the initial cache load, since the data must get into the cache somehow. So what are the little blips of database activity occurring after the initial load?

The application activity causing this database traffic is the repeated listing of all of the owners. Is it possible that this is causing our database activity, even though we've already cached all of the owners?

Indeed, this is the case. To generate the list of owners, Hibernate must issue a query to the database. Once the result set is generated (a list of entity ids), all of the subsequent reads can be satisfied from the cache. We can confirm this using the Developer Console. If you suspect that it can show you statistics on queries, then you are starting to understand how the Developer Console works and what it can do for you!

To see Hibernate query statistics, select the "Hibernate" button in the upper-right corner. Then select the "Queries" tab. This will show you the queries that are being performed by Hibernate. If we do so now, sure enough, just as we expected, we can see an entry for our Owner query:



If we refresh our owner list again and press "Refresh" on the query statistics, we should see the number in the Executions column increase by one. In my case, the previous screenshot shows 5 executions; after reloading the owner query and refreshing the statistics page, I see 6.
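The execution count shown in the Developer Console can also be read from Hibernate's statistics API. The snippet below is an illustrative sketch (not part of PetClinic); it assumes statistics generation is enabled and that the HQL string matches the owner query in HibernateClinic.java exactly:

import org.hibernate.SessionFactory;
import org.hibernate.stat.QueryStatistics;

public class OwnerQueryStats {

    // Prints how often the "find owners" HQL has been executed against the
    // database, and how often it has been answered from the query cache.
    public static void logOwnerQueryStats(SessionFactory sessionFactory) {
        String hql = "from Owner owner where owner.lastName like :lastName";
        QueryStatistics qs = sessionFactory.getStatistics().getQueryStatistics(hql);
        System.out.println("executions:       " + qs.getExecutionCount());
        System.out.println("query cache hits: " + qs.getCacheHitCount());
        System.out.println("query cache puts: " + qs.getCachePutCount());
    }
}

Until query caching is enabled in Step 11, the cache hit count will stay at zero while the execution count climbs with every "find owners" request.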

Step 11 - Enable Query Caching

Is there a way to eliminate these database queries? Yes, there is: it is called the Query Cache. Before enabling it, I highly recommend reading up on how it works and when it is appropriate.

In short, the query cache can cache the results of a query, meaning that Hibernate doesn't have to go back to the database for every "list owners" request we make. Does it make sense to turn this caching on? Not always, as Alex points out (make sure you read his article).

Because our goal today is to eliminate all database reads, we are going to turn on query caching. Whether you should do so or not for your application will depend on many factors, so make sure you fully understand the role of the query cache and how to use it.

To enable query caching, we have to do two things:
  • Enable query caching in the hibernate config file
  • Enable caching for each query

This isn't very different from how we enabled caching for entities, with one exception: to enable caching for each query, we unfortunately have to modify the code where each query is created.
So, to enable query caching in the Hibernate config file, edit the applicationContext-hibernate.xml file again and add the hibernate.cache.use_query_cache property:
<!-- Hibernate SessionFactory -->
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"
    p:dataSource-ref="dataSource" p:mappingResources="petclinic.hbm.xml">
  <property name="hibernateProperties">
    <props>
      <prop key="hibernate.dialect">${hibernate.dialect}</prop>
      <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
      <prop key="hibernate.generate_statistics">${hibernate.generate_statistics}</prop>
      <prop key="hibernate.cache.use_second_level_cache">true</prop>
      <prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.EhCacheRegionFactory</prop>
      <prop key="hibernate.cache.use_query_cache">true</prop>
    </props>
  </property>
  <property name="eventListeners">
    <map>
      <entry key="merge">
        <bean class="org.springframework.orm.hibernate3.support.IdTransferringMergeEventListener"/>
      </entry>
    </map>
  </property>
</bean>
Now, edit the source code. There is just one file to edit: src/org/springframework/samples/petclinic/hibernate/HibernateClinic.java, which contains the definitions for all the queries. Edit it to add setCacheable(true) to each query, like so:

public class HibernateClinic implements Clinic {

    @Autowired
    private SessionFactory sessionFactory;

    @Transactional(readOnly = true)
    @SuppressWarnings("unchecked")
    public Collection<Vet> getVets() {
        return sessionFactory.getCurrentSession()
            .createQuery("from Vet vet order by vet.lastName, vet.firstName")
            .setCacheable(true).list();
    }

    @Transactional(readOnly = true)
    @SuppressWarnings("unchecked")
    public Collection<PetType> getPetTypes() {
        return sessionFactory.getCurrentSession()
            .createQuery("from PetType type order by type.name")
            .setCacheable(true).list();
    }

    @Transactional(readOnly = true)
    @SuppressWarnings("unchecked")
    public Collection<Owner> findOwners(String lastName) {
        return sessionFactory.getCurrentSession()
            .createQuery("from Owner owner where owner.lastName like :lastName")
            .setString("lastName", lastName + "%")
            .setCacheable(true).list();
    }

    // ... remaining methods unchanged ...
}

Now re-build and re-deploy.

Look in the Developer Console under the Second-Level Cache Statistics graph. Apart from the initial load, there should be 0 DB SQL executions.



Conclusion


I hope this tutorial was useful. Using caching in your application can improve performance and scalability dramatically. If you'd like to review some performance numbers that Terracotta has published, visit Terracotta's EHCache site and look in the right-hand margin for a whitepaper showing the results of performance testing the Spring PetClinic application.

Tuesday, January 19, 2010

XTP Processing using a Distributed SEDA Grid built with Coherence

I just finished giving a talk at the NYC Coherence SIG on January 14, 2010.


The talk highlights the concepts Grid Dynamics used to build a high-throughput, scalable XTP engine for processing telecommunications billing events. The framework employs a distributed SEDA architecture built on top of a Coherence In-Memory Data Grid back-end.

Sunday, January 03, 2010

Characterizing Enterprise Systems using the CAP theorem

In mid-2000, Eric A. Brewer, a co-founder of Inktomi and chief scientist at Yahoo!, and currently a professor of Computer Science at U.C. Berkeley, presented a keynote speech at the ACM Symposium on Principles of Distributed Computing. In that seminal speech, Brewer described a theorem based on research and observations he had made, now known as the CAP theorem.


The CAP theorem is based on the observation that a distributed system is governed by three fundamental characteristics:

  1. Consistency
  2. Availability
  3. Partition tolerance


CAP is a useful tool for understanding the behavior of a distributed system. It states that of the three fundamental characteristics of a distributed computing system, you may have any two but never all three. Its usefulness in designing and building distributed systems cannot be overstated. So, how can we use this theorem to our advantage?


For the designer of an enterprise-scale system, CAP provides a framework for deciding which tradeoffs must be made in our own implementations. It allows us not only to understand the systems we are building more precisely, but also to classify all systems along the same lines. It is thus an invaluable tool when evaluating the systems that we rely on day in and day out in our enterprise architectures.


As an example, let's analyze the traditional Relational Database Management System (RDBMS). The RDBMS, arguably one of the most successful enterprise technologies in history, has been around in its current form for nearly 40 years! The primary reason for the staying power of the RDBMS lies with its ability to provide consistency. A consistent system is most easily understood and reasoned about, and therefore most readily adopted (thus explaining the popularity of the RDBMS). But what of the other properties? An RDBMS provides availability, but only when there is connectivity between the client accessing the RDBMS and the RDBMS itself. Thus it can be said that the RDBMS does not provide partition tolerance - if a partition arises between the client and the RDBMS, the system will not be able to function properly. In summary, we can thus characterize the RDBMS as a CA system due to the fact that it provides Consistency and Availability but not Partition tolerance.


As useful as this classification is, we can go one step further. Given that a system will always lack one of C, A, or P, mature systems have commonly evolved a means of partially recovering the lost CAP characteristic. In the case of our RDBMS example, there are several well-known approaches that compensate for the lack of Partition tolerance. One of these is commonly referred to as master/slave replication: database writes are directed to a specially designated system, the master, and data from the master is then replicated to one or more additional slave systems. If the master goes offline, reads may be failed over to any one of the surviving read-replica slaves.


Thus, in addition to characterizing systems by their CAP traits, we can further characterize them by the recovery mechanism(s) they provide for the missing CAP trait. In the remainder of this article I classify a number of popular systems found in today's enterprise (and non-enterprise) distributed architectures. These systems are:

  • RDBMS
  • Amazon Dynamo
  • Terracotta
  • Oracle Coherence
  • GigaSpaces
  • Cassandra
  • CouchDB
  • Voldemort
  • Google BigTable

RDBMS

CAP: CA

Recovery Mechanisms: Master/Slave replication, Sharding


RDBMS systems are fundamentally about providing availability and consistency of data. The gold standard for RDBMS updates, referred to as ACID, governs the way in which consistent updates are recorded and persisted.


Various means of improving RDBMS performance are available in commercial systems, and because the RDBMS is such a mature technology these mechanisms are well understood. For example, the consistency of conflicting reads and writes during the course of a transaction is governed by isolation levels (a small JDBC sketch follows the list below). The commonly accepted isolation levels, in decreasing order of consistency (and increasing order of performance), are:

  • SERIALIZABLE
  • REPEATABLE READ
  • READ COMMITTED
  • READ UNCOMMITTED
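For illustration, here is a small, self-contained JDBC sketch showing how an application selects one of these isolation levels on a per-connection basis; the connection URL and credentials are placeholders only:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class IsolationExample {

    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials; substitute your own database.
        Connection conn = DriverManager.getConnection("jdbc:hsqldb:hsql://localhost", "sa", "");
        try {
            // Trade a little consistency for performance by dropping below SERIALIZABLE.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            conn.setAutoCommit(false);
            // ... perform reads and writes here ...
            conn.commit();
        } finally {
            conn.close();
        }
    }
}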


Recovery mechanisms:

  • Master/Slave replication: A single master accepts writes, and data is replicated to slaves. Data read from slaves may be slightly out of date, trading away some Consistency to provide Partition tolerance.
  • Sharding: While not strictly limited to database systems, sharding is commonly used in conjunction with one. Sharding refers to the practice of separating the application into vertical slices that are 100% independent of one another. It isolates failures into "swimlanes" and is one example of a "fault isolative architecture", limiting the impact of any single failure (or related set of failures) to one portion of the application. Sharding provides some measure of Partition tolerance by assuming that failures occur on a small enough scale to be isolated to a single shard, leaving the remaining shards operational (a toy routing sketch follows this list).
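As a toy illustration of the sharding idea, here is a hypothetical key-based router. The hash-modulo scheme and routing by JDBC URL are illustrative only; real deployments typically use consistent hashing or a directory service:

import java.util.List;

public class ShardRouter {

    // Each entry is the JDBC URL of one independent shard ("swimlane").
    private final List<String> shardUrls;

    public ShardRouter(List<String> shardUrls) {
        this.shardUrls = shardUrls;
    }

    // A given key always maps to the same shard, so the failure of one shard
    // leaves keys belonging to every other shard fully operational.
    public String shardFor(String customerId) {
        int index = (customerId.hashCode() & 0x7fffffff) % shardUrls.size();
        return shardUrls.get(index);
    }
}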

Amazon Dynamo

CAP: AP

Recovery: Read-repair, application hooks


Amazon's Dynamo is a private system designed and used solely by Amazon. It was intentionally designed to provide Availability and Partition tolerance, but not Consistency. The appearance of Dynamo was very nearly as seminal as the introduction of the CAP theorem itself: owing to the dominance of the database, until Amazon introduced Dynamo to the world it was almost taken for granted that enterprise systems must provide Consistency, and that the only tradeoff available was between the remaining two CAP characteristics, Availability and Partition tolerance.


Examining the requirements for Amazon's Dynamo, it's clear why the designers chose to buck the trend: Amazon's business model depends heavily on availability. Even the simplest of estimates pegs the losses Amazon could suffer from an outage at a minimum of $30,000 per minute. Given that Amazon's growth has nearly quadrupled since those estimates were made (in 2008), we can estimate that in 2010 Amazon may lose as much as $100,000 per minute. Put simply, availability matters a lot at Amazon. Furthermore, the fallacies of distributed computing tell us that the network is unreliable, so we must expect partitions to occur on a regular and frequent basis. It's a simple matter, then, to see that the only remaining CAP characteristic left to sacrifice is Consistency.


Dynamo provides an eventual consistency model, where all nodes will eventually get all updates during their lifetime.

Given a system composed of N nodes, the eventual consistency model is tuned as follows:

  • Setting the number of writes needed for a successful write operation (W).
  • Setting the number of reads needed for a successful read operation (R).

Setting W = N or R = N gives you a quorum-like system with strict consistency but no partition tolerance, since every node must respond. Lowering W and R trades consistency away in exchange for availability and partition tolerance; as long as R + W > N, read and write quorums still overlap and a read is guaranteed to see the latest successful write.
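To make the quorum arithmetic concrete, here is a toy helper that applies the usual overlap rule (a read is guaranteed to observe the latest successful write whenever R + W > N). It is purely illustrative, not Dynamo code:

public class QuorumCheck {

    // True if every read quorum must overlap every write quorum,
    // i.e. a read is guaranteed to see the latest successful write.
    public static boolean readsSeeLatestWrite(int n, int w, int r) {
        return r + w > n;
    }

    public static void main(String[] args) {
        System.out.println(readsSeeLatestWrite(3, 2, 2)); // true  - overlapping quorums
        System.out.println(readsSeeLatestWrite(3, 1, 1)); // false - eventually consistent
    }
}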


Given that different nodes may have different versions of the same value (i.e., a value may have been written during a node downtime), Dynamo needs to:

  • Track versions and resolve conflicts.
  • Propagate new values.
Versioning is implemented by using vector clocks: each value is associated with a list of (node, counter) pairs that is updated every time a specific node writes that value; vector clocks can be used to determine causal ordering and branching. Conflict resolution is done during reads (read repair), eventually merging values with diverging vector clocks and writing the result back.
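The following is a bare-bones sketch of the vector clock idea described above; the map-based representation and method names are my own simplification, not Dynamo's actual implementation:

import java.util.HashMap;
import java.util.Map;

public class VectorClock {

    // node id -> number of updates that node has applied to the value
    private final Map<String, Long> counters = new HashMap<String, Long>();

    // Called by a node each time it writes the value.
    public void increment(String nodeId) {
        Long current = counters.get(nodeId);
        counters.put(nodeId, current == null ? 1L : current + 1L);
    }

    // True if this clock is causally before (or equal to) the other,
    // i.e. the other value already reflects all of these updates.
    public boolean happenedBefore(VectorClock other) {
        for (Map.Entry<String, Long> e : counters.entrySet()) {
            Long theirs = other.counters.get(e.getKey());
            if (theirs == null || theirs < e.getValue()) {
                return false;
            }
        }
        return true;
    }

    // Neither clock dominates the other: the writes were concurrent, so
    // read repair must merge the values or otherwise resolve the conflict.
    public boolean isConcurrentWith(VectorClock other) {
        return !this.happenedBefore(other) && !other.happenedBefore(this);
    }
}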

New values are propagated by using hinted handoff and Merkle trees.


Terracotta

CAP: CA

Recovery: Quorum vote, majority partition survival


Terracotta is a Java-based distributed computing platform that provides high level features such as Caching via EHCache and highly available scheduling via Quartz. Additional support for Hibernate second level caching allows architects to easily adopt Terracotta in a standard JEE architecture that relies on Spring, Hibernate and an RDBMS.


Terracotta's architecture is similar to that of a database. Clients connect to one or more servers arranged into a "Server Array" layer. Updates are always Consistent in a Terracotta cluster, and availability is guaranteed so long as no partitions exist in the topology.


Recovery Mechanisms:

  • Quorum: Upon failure of a single server, a backup server may take over once it has received enough votes from cluster members to elect itself the new master.
  • Majority partition survival: In the event of a catastrophic partition involving many members of a Terracotta cluster that divides the cluster into one or more non-communicative partitions, the partition with a majority of remaining nodes is allowed to continue after a pre-configured period of time elapses.

Oracle Coherence

CAP: CA

Recovery: Partitioning, Read-replicas


Oracle Coherence is an in-memory Java data grid and caching framework. Its main architectural trait is its ability to provide consistency (hence its name). All data in Oracle Coherence has at most one home. Data may be replicated to a configurable number of additional members in the cluster. When a node fails, the replica holders vote on which of them becomes the new home for the data that was homed on the failed node.


Coherence provides data-grid features that facilitate processing data using map-reduce-like techniques (executing the work where the data lives, instead of moving the data to the processing), and a host of distributed computing patterns are available in the Coherence Incubator.


Recovery Mechanism(s):

  • Data Partitioning: At a granular level, any one piece of data exhibits CA properties; that is to say, reads and writes of data in Coherence are always Consistent, and the data is available as long as no partitions exist, meaning that for a particular piece of data Coherence is not Partition tolerant. However, similar to the database sharding mechanism, data may be partitioned across the cluster nodes, so that a partition only affects a subset of all data.
  • Read-replication: Coherence caches may be configured in varying topologies. When a Coherence cache is configured in read-replicated mode it exhibits CA: data is consistent, but writes block in the face of partitions.


GigaSpaces

CAP: CA or AP, depending on the replication scheme chosen

Recovery: Per-key data partitioning


GigaSpaces is a Java based application server that is fundamentally built around the notion of Space-based computing, an idea derived from Tuple Spaces which was the core foundation of the Linda programming system.


GigaSpaces provides high availability of data placed in the space by means of synchronous and asynchronous replication schemes.


In synchronous replication mode, GigaSpaces provides Consistency and Availability: the system is consistent and available, but cannot tolerate partitions. In asynchronous replication mode, GigaSpaces provides Availability and Partition tolerance: the system is available for reads and writes, but is only eventually consistent (after the asynchronous replication completes).


Recovery Mechanism(s):

  • Per-key data partitioning - GigaSpaces supports a mode called Partitioned-Sync2Backup. This allows for data to be partitioned based on a key to lower the risk of shared fate and to provide a synchronous copy for recovery.


Apache Cassandra

CAP: AP

Recovery: Partitioning, Read-repair


Apache Cassandra was developed at Facebook using the same principles as Amazon's Dynamo, so it is no surprise that Cassandra's CAP traits are the same as Dynamo's.


For read repair, Cassandra uses simple timestamps instead of the more complex vector clock scheme used by Amazon's Dynamo.
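To contrast with the vector clock sketch shown earlier, a timestamp-based "last write wins" resolution looks roughly like this; it is an illustration of the general idea, not Cassandra's actual code:

public class LastWriteWins {

    public static final class TimestampedValue {
        final long timestampMicros;
        final String value;

        TimestampedValue(long timestampMicros, String value) {
            this.timestampMicros = timestampMicros;
            this.value = value;
        }
    }

    // During read repair the replica with the newest timestamp simply wins;
    // there is no notion of "concurrent" writes as there is with vector clocks.
    public static TimestampedValue resolve(TimestampedValue a, TimestampedValue b) {
        return a.timestampMicros >= b.timestampMicros ? a : b;
    }
}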


Recovery Mechanism(s):

  • Partitioning
  • Read-repair


Apache CouchDB

CAP: AP

Recovery:


Apache CouchDB is a document oriented database that is written in Erlang.


Voldemort

Link: http://project-voldemort.com

CAP: AP

Recovery: Configurable read-repair


Project Voldemort is an open-source distributed key-value store developed by LinkedIn and released as open source in February 2009. Voldemort exhibits characteristics similar to those of Amazon's Dynamo: it uses vector clocks for version detection and read repair.


Recovery Mechanism(s):

  • Read-repair with versioning using vector clocks.


Google BigTable

CAP: CA

Recovery:


Google's BigTable is, according to Wikipedia, "a sparse, distributed multi-dimensional sorted map", sharing characteristics of both row-oriented and column-oriented databases. It relies on the Google File System (GFS) for data replication.



This blog post is still a work in progress. There are many other systems worth evaluating, among them Terrastore, Erlang-based frameworks like Mnesia, and message-based systems such as Scala Actors and Akka. If you would like to see something else covered, please mention it in the comments.

Thanks also go to Sergio Bossa for assistance in writing this blog post.