All posts by tristan.tarrant

Cassandra CacheStore now in Infinispan trunk

Since I have been accepted as an Infinispan contributor, I have committed my first complete implementation of the Cassandra CacheStore to Infinispan’s trunk. This means it will be included in Infinispan 5.0, whenever that is released.

In the meantime I have migrated all the code into my repository and have released a 0.0.2 version which can be used with the current Infinispan 4.1.x and 4.2.x. The Maven dependency (if you use my repository) is:


	<dependency>
		<groupId>net.dataforte.infinispan</groupId>
		<artifactId>infinispan-cachestore-cassandra</artifactId>
		<version>0.0.2</version>
	</dependency>

I would be very grateful if you could test it in your environment.

I am also working on adding multiple host addresses and automatic ring discovery to my Cassandra Connection Pool.


Mirroring multiple Eclipse update sites

When you have lots of developers needing a homogeneous set of Eclipse plugins, it makes sense to provide a local mirror of the necessary update sites for easy provisioning. I have written a simple shell script for this, which I run periodically from cron:

#!/bin/sh
ECLIPSE_HOME=/opt/eclipse
REPO_HOME=/var/www/p2repo
JAVA_OPTS="-Dhttp.proxyHost=proxy.acme.com -Dhttp.proxyPort=3128"
UPDATE_SITES="http://download.jboss.org/jbosstools/updates/development/ http://subclipse.tigris.org/update_1.6.x/ http://m2eclipse.sonatype.org/sites/m2e http://m2eclipse.sonatype.org/sites/m2e-extras http://directory.apache.org/studio/update/1.x/ http://eclipse.jcraft.com/ http://java.decompiler.free.fr/jd-eclipse/update"
for i in $UPDATE_SITES; do
	echo $i
	java $JAVA_OPTS -jar $ECLIPSE_HOME/plugins/org.eclipse.equinox.launcher_*.jar -application org.eclipse.update.core.standaloneUpdate -command mirror -from $i -to $REPO_HOME
done
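
The cron entry I use looks roughly like this; the script path and schedule are only examples (here, every Sunday at 02:30):

# m h dom mon dow command
30 2 * * 0 /usr/local/bin/mirror-eclipse-sites.sh >/var/log/p2mirror.log 2>&1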


Infinispan Cassandra CacheStore

Since my previous post about the Cassandra Connection Pool, I have made progress on another project: a Cassandra CacheStore for Infinispan.
I have published the initial source for this CacheStore on my personal SVN repository; get it at:

http://dataforte.dyndns.org/svn/dataforte/infinispan-cachestore-cassandra/trunk/

If you use Maven, add my repository, which contains all the required dependencies, to your settings.xml or to your repository manager (Nexus, Artifactory, etc.):

http://www.dataforte.net/listing/maven/releases/
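
If you go the settings.xml route, an entry along these lines should do (a minimal sketch: the profile and repository ids are arbitrary names chosen for illustration):

	<profiles>
		<profile>
			<id>dataforte</id>
			<repositories>
				<repository>
					<id>dataforte-releases</id>
					<url>http://www.dataforte.net/listing/maven/releases/</url>
				</repository>
			</repositories>
		</profile>
	</profiles>
	<activeProfiles>
		<activeProfile>dataforte</activeProfile>
	</activeProfiles>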

See the tests under src/test for examples of how to set up an Infinispan cache backed by a Cassandra database, and of how a Lucene InfinispanDirectory can take advantage of both systems.

I am rearranging things for potential inclusion in the main Infinispan distribution, so some details may change (package names in particular). Key expiration is also not as efficient as I would like, but release early, release often is good practice, so there 🙂


Cassandra Connection Pool

For a project of mine I needed a robust, decent connection pool for Cassandra. Since I use the Thrift API directly, without other layers on top, I decided to use Tomcat 7’s excellent jdbc-pool as a base. The result is my cassandra-connection-pool, which for the moment can be fetched from my GitHub repository at http://github.com/tristantarrant/cassandra-connection-pool
It supports most of the features of its parent implementation: connection validation on creation, on release and while idle, abandoned connection detection, and so on.
Let me know if you use it.


Catwalk Model Processor

I have just released the code for Catwalk, a Java annotation processor for automatically generating derived domain model classes.

Suppose you have a JPA entity that you want to pass to a servlet stripped of certain private/internal properties: add a few annotations to the getters you want to expose, and Catwalk generates a new class with only those properties, plus convenience methods for converting between the two types of objects.

The project is still missing a few essentials before it is really useful, such as documentation, proper examples and publication to a Maven repository.

In the following example, a TestModel class is converted to a WebTestModel class:

TestModel.java

package net.dataforte.test.model;

@Model(pattern = "Web#", classPackage = "net.dataforte.test.webmodel")
public class TestModel {
	String s;
	int i;

	@ModelAttribute
	public String getS() {
		return s;
	}

	public void setS(String s) {
		this.s = s;
	}

	public int getI() {
		return i;
	}

	public void setI(int i) {
		this.i = i;
	}
}

WebTestModel.java

package net.dataforte.test.webmodel;

public class WebTestModel {

	private java.lang.String s;

	public WebTestModel() {}

	public WebTestModel(net.dataforte.test.model.TestModel src) {
		this.fromTestModel(src);
	}

	java.lang.String getS() {
		return s;
	}

	void setS(java.lang.String s) {
		this.s = s;
	}

	public WebTestModel fromTestModel(net.dataforte.test.model.TestModel src) {
		this.s = src.getS();
		return this;
	}

	public net.dataforte.test.model.TestModel toTestModel() {
		net.dataforte.test.model.TestModel that = new net.dataforte.test.model.TestModel();
		that.setS(this.s);
		return that;
	}

}
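
Based on the generated API above, putting the two classes together looks roughly like this (only the annotated property s survives the round trip):

import net.dataforte.test.model.TestModel;
import net.dataforte.test.webmodel.WebTestModel;

public class CatwalkExample {
	public static void main(String[] args) {
		TestModel model = new TestModel();
		model.setS("hello");
		model.setI(42);

		// Only the @ModelAttribute properties (here just "s") are copied across
		WebTestModel web = new WebTestModel(model);

		// ...hand "web" to the web layer, then convert back when needed
		TestModel back = web.toTestModel();
		System.out.println(back.getS()); // prints "hello"; "i" is lost in the round trip
	}
}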

Java and Large Memory Pages on Linux

Recently I helped configure a system for an application running under Tomcat on Linux with very large memory requirements: a minimum heap of 6GB with a maximum of 11GB. The JVM was initially configured to use the Parallel garbage collector. With this configuration, garbage collection of the “Young Generation” was fine, but the “Old Generation” GC was taking over 30 seconds (and blocking all other threads while doing so). We looked into enabling Large Memory Pages, a feature of modern CPUs which allows memory-hungry applications to allocate memory in 2MB chunks instead of the standard 4KB. Documentation on the web about exactly how to do this is sparse and misses some details we ran into. Here’s the sequence of steps we had to take:

  1. configure the kernel’s maximum shared memory to span the whole address space (via the kernel.shmmax and kernel.shmall parameters)
  2. configure the kernel’s allocated large memory pages (via the vm.nr_hugepages parameter)
  3. configure the user limits to ensure that the user running Tomcat can lock the necessary memory (via the memlock parameter)
  4. ensure that PAM applies the security limits to users who “login” via su and sudo
  5. configure the JVM for Large Memory Pages

Add the following lines to /etc/sysctl.conf and use sysctl -p to reload the changes into the running kernel, although I recommend rebooting the system so that the Large Memory Pages can be properly allocated (they have to be contiguous).

# Maximum size of a shared memory segment (in bytes)
kernel.shmmax=17179869184
# Maximum total size of all shared memory segments (in pages of 4KB)
kernel.shmall=3145728
# Number of allocated Large Memory Pages (each one takes up 2MB)
vm.nr_hugepages=6144
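
After rebooting, it is worth checking that the kernel actually reserved the pages; the HugePages counters in /proc/meminfo should reflect the values above:

# HugePages_Total should match vm.nr_hugepages, Hugepagesize should be 2048 kB
grep Huge /proc/meminfo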

Edit /etc/security/limits.conf so that the user running the Java application can lock the correct amount of memory.

tomcat soft memlock 12884901888
tomcat hard memlock 12884901888

Edit /etc/pam.d/su and /etc/pam.d/sudo and ensure that they contain the following line so that the above memory limits are applied:

session required pam_limits.so
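
A quick way to confirm that the limit is really applied to the Tomcat user (assuming the account is called tomcat, as above) is to check the reported max locked memory:

# Run as root: this should print the configured memlock value,
# not the small distribution default
su - tomcat -c 'ulimit -l'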

Next add the relevant options to the JVM’s command-line:

-XX:+UseLargePages -Xmx11g -Xms6g
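
Since the application runs under Tomcat, a convenient place for these flags is a bin/setenv.sh file, which catalina.sh sources at startup; a minimal sketch, assuming the standard Tomcat launch scripts:

# $CATALINA_BASE/bin/setenv.sh (sourced by catalina.sh on startup)
CATALINA_OPTS="$CATALINA_OPTS -XX:+UseLargePages -Xms6g -Xmx11g"
export CATALINA_OPTS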


Eclipse crashes with Lucid

Update: it seems like a lot of people find this useful. With the final release of Ubuntu 10.04 Lucid Lynx, xulrunner-1.9.2 is in the main repository. Because of this, just uninstall xulrunner-1.9.1 and you’re done.

I like living on the edge: I use Ubuntu Lucid and Eclipse 3.6M5 on my x86-64 notebook. Since the 25th of February, Eclipse has been crashing when closing the content assist popup window. I narrowed it down to the upgrade to libcairo2-1.8.10-2ubuntu1, which causes a RenderBadPicture X error when the documentation popup that appears to the right of the possible completions is closed. Downgrading to libcairo2-1.8.8 solved the problem, but it seems that the real problem lies within xulrunner 1.9.1, which SWT uses to render the docs. The Mozilla bug is https://bugzilla.mozilla.org/show_bug.cgi?id=522635. A quick workaround is to set the GRE_HOME environment variable to some meaningless path (e.g. /tmp) before launching Eclipse. Another possible solution is to install xulrunner-1.9.2 (only available from the mozilla-daily PPA). I have filed a bug report on Launchpad asking for an official upgrade to xulrunner 1.9.2.
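
For the GRE_HOME workaround, something along these lines before starting Eclipse is enough (the Eclipse path matches the one used in the mirroring script above; adjust it to your install):

# Point the GRE lookup at a harmless directory, then start Eclipse
export GRE_HOME=/tmp
/opt/eclipse/eclipse &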


XBMC+NVIDIA+DynamicTwinView = Wrong refresh rate

I was wondering why I was getting tearing in XBMC even though I had enabled vsync, and why XBMC was reporting a refresh rate of 50Hz for my screen instead of the 75Hz it should have been. It turns out that NVIDIA’s DynamicTwinView feature reports a fake refresh rate (that of the MetaMode which encompasses all of the screens involved in TwinView), which may not be the one of the physical device. As I am not using TwinView, I added the following Option to the Device section of my xorg.conf:

Option "DynamicTwinView" "False"

HTPC photos

I promised over a month ago some photos of my custom HTPC, so here they are:

The front of the box showing the 20x2 LCD, the slimline Optiarc BC-5600S Blu-Ray/DVDRW and two of the USB ports
The untidy guts of the box showing the Zotac ION ITX motherboard on the right, the 700GB Western Digital HDD and the Slimline BD/DVDRW Drive. There are two 1GB sticks of DDR2 RAM under the optical drive. Notice the unused horizontal slots in the case at the rear of the box
The back of the case showing the multitude of connections: PS/2 mouse, USB Wi-Fi Siemens stick, optical S/PDIF output, HDMI, VGA, analog audio out and the DC power cable (the brick is out of view)

The box is running Ubuntu 9.10 (aka Karmic Koala) and XBMC which takes advantage of the ION’s PureVideo capabilities via VDPAU.
