Thursday, May 8, 2014

Understand cloudfoundry java buildpack code with tomcat example

Cloudfoundry's java buildpack supports several popular JVM-based applications. This article is aimed at readers who already have experience with cloudfoundry/heroku buildpacks and want a deeper understanding of how a buildpack and cloudfoundry work together internally.

 cf push app -p app.war -b build-pack-url
The above command pushes a war file to cloudfoundry using a custom buildpack. But what exactly happens inside, and how does cloudfoundry bootstrap the war file with tomcat? Three contract phases bridge the communication between the buildpack and cloudfoundry: detect, compile and release, each exposed as a Ruby script (bin/detect, bin/compile and bin/release).

The java buildpack has multiple sub-components, and each of them implements all three phases (e.g. tomcat is one sub-component, and it in turn contains another layer of sub-components).

Detect Phase:

The detect phase checks whether a particular buildpack/component applies to the deployed application. In the war file example, the tomcat component applies only when supports? returns true:

def supports?
  web_inf? && !JavaBuildpack::Util::JavaMainUtils.main_class(@application)
end
The above code means the tomcat component applies when the application has a WEB-INF folder and is not an application bootstrapped from a main class.
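The same check can be sketched in Java (a hypothetical TomcatDetect class written for illustration only; the real buildpack does this in Ruby):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class TomcatDetect {

    // Mirrors the Ruby supports? method: the tomcat component applies when the
    // unpacked application has a WEB-INF directory and no bootstrap main class.
    static boolean supportsTomcat(Path appRoot, String mainClass) {
        return Files.isDirectory(appRoot.resolve("WEB-INF")) && mainClass == null;
    }

    public static void main(String[] args) throws Exception {
        Path app = Files.createTempDirectory("app");
        Files.createDirectory(app.resolve("WEB-INF"));
        System.out.println(supportsTomcat(app, null));               // war layout, applies
        System.out.println(supportsTomcat(app, "com.example.Main")); // main-class app, does not
    }
}
```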

Compile Phase:

The compile phase is where most of the work of a customized buildpack happens: it builds up a file system inside an lxc container. Take our war application and tomcat as the example.

      def compile
        download(@version, @uri) { |file| expand file }
        link_to(@application.root.children, root)
        @droplet.additional_libraries << tomcat_datasource_jar if tomcat_datasource_jar.exist?
        @droplet.additional_libraries.link_to web_inf_lib
      end

      def expand(file)
        with_timing "Expanding Tomcat to #{@droplet.sandbox.relative_path_from(@droplet.root)}" do
          FileUtils.mkdir_p @droplet.sandbox
          shell "tar xzf #{file.path} -C #{@droplet.sandbox} --strip 1 --exclude webapps 2>&1"
        end
      end


The above code prepares tomcat and links the application files, so that the application files will be available on the tomcat classpath. Before walking through the code, we have to understand the working directory layout when it executes:

. => working directory
.app => @application, contains the extracted war archive
.buildpack/tomcat => @droplet.sandbox
.buildpack/other needed components

Inside compile method:

  1. The download method downloads the tomcat binary (the version and URI come from the buildpack configuration) and extracts the archive into the @droplet.sandbox directory; the files in the resources folder are then copied to @droplet.sandbox/conf
  2. Symlink @droplet.sandbox/webapps/ROOT to .app/
  3. Symlink additional libraries (coming from other components rather than the application) into WEB-INF/lib
Note: all the symlinks use relative paths, because when the container is deployed to a DEA the absolute paths will be different.
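The relative-symlink idea can be sketched in Java (a hypothetical RelativeLink class with illustrative directory names; the buildpack itself does this in Ruby):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RelativeLink {

    // Create `link` pointing at `target` via a relative path, so the link
    // still resolves after the whole tree is moved to a different absolute
    // location (as happens when the droplet lands on a DEA).
    static void relativeSymlink(Path link, Path target) throws IOException {
        Files.createSymbolicLink(link, link.getParent().relativize(target));
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("droplet");
        Path app = Files.createDirectories(root.resolve("app"));
        Path webapps = Files.createDirectories(root.resolve(".buildpack/tomcat/webapps"));

        relativeSymlink(webapps.resolve("ROOT"), app);

        // The stored target is relative (e.g. ../../../app), not absolute
        System.out.println(Files.readSymbolicLink(webapps.resolve("ROOT")));
    }
}
```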


Release Phase:

The release phase sets up the instructions for how to start tomcat. Look at the command method:

      def command
        @droplet.java_opts.add_system_property 'http.port', '$PORT'
        ...
        [
          "$PWD/#{(@droplet.sandbox + 'bin/').relative_path_from(@droplet.root)}",
        ].flatten.compact.join(' ')
      end

The above code does:

  1. Adds the java system property http.port (referenced in tomcat's server.xml) from the environment variable $PORT; this is the port on the DEA that bridges to the lxc container and was set up when the container was provisioned.
  2. Builds the instruction for how to run tomcat, e.g. "./bin/ run"

Thursday, February 20, 2014

中国大陆人在美国职业移民剖析(Green card Crap for Main Land born Chinese Professionals in United States)

I am a poor guy complaining about the US green card policy toward mainland-China-born Chinese in my own blog :)
My profile:

  •    Been in the US for 7.5 years;
  •    No green card;
  •    Got my Master's degree in the US;
  •    Worked hard in the US for 5.5 years on an H1B;
  •    Paid a lot of taxes;
  •    My sponsoring employers paid a lot of money to the US government;

What is an H1B? H1B is the visa type for which a company has to pay the US government in order to hire foreigners. Problems with the H1B:

  •    Expensive, so employers do not want to do it unless they really need to
  •    Valid for three years, but tied to the employer. If you change employers, you have to apply for a new one, so the government can collect money from you again.
  • If you change work locations, you have to apply for another one.
  •    The visa stamp used to enter the US is only valid for 1 year (it used to be only 3 months), while India and other countries get 3 years. So basically you cannot go back to your home country, given the lengthy visa application process, unless your boss grants you a very long vacation.

Since the H1B sucks, we hope to get a green card as soon as possible to get out of the nightmare... So what about the employment based green card?

  • More expensive and with more requirements than the H1B
  • File the PERM, then the I-140, then the I-485. If we change jobs before the I-485, we have to restart the whole process with the new employer.
  •    EB1 is for extraordinary people, like professors with a lot of publications or athletes with gold medals. EB2 is for experienced or advanced degree professionals. EB3 is for people with basic skills and bachelor's degrees. EB5 is for rich people giving money to the US government. So most of us hard workers fall into EB2 and EB3.
  •    The US government tries to keep diversity, so it only allots a certain quota per nation to be qualified to file the I-485. Since China is a big nation, we are competing with each other, which makes a long queue. The waiting period is not one or two years; it is about 4-5 years, up to 10 years.
  •    The quota left over each year from other countries is spilled over to Indian and Chinese EB2, the only two countries with the above issue. At that point they stop talking about diversity and give almost all the quota to India, since India has a longer queue. So India got 10 times more green cards in 2013 than China.
  •    The most ridiculous thing is that Chinese people are maybe so smart and well qualified that few apply for the less demanding EB3, so EB2 China has a much longer queue than EB3. There is no remedy for EB2 China for that... The US government just lets the quota go to waste and flow into Indian EB3.

I understand life cannot be fair most of the time. But for the most powerful country, one that prides itself on freedom and fairness, the system is ridiculous and unfair to Chinese hard workers. By the way, I have classmates who went to Australia or Canada and got their green cards in maybe only 2 years, and friends who cursed the Chinese government just because that is what the US government wants to hear, and got a refugee type green card in 6 months. I question whether I chose the wrong destination in the first place, and I warn those Chinese American Dreamers about the reality.



Tuesday, February 18, 2014

Use maven aspectj plugin to look inside of Spring

As spring gains more and more features, its code base grows more complex. Sometimes a developer wants customized tracing information that spring's debug logging does not provide. Aspectj, as an instrumentation tool, can help here, and the maven aspectj plugin makes it quick to set up a project and get the instrumentation done. In this article I will demo how to use the maven aspectj plugin to instrument spring code.

The parent pom has two modules, aspectj and spring; aspectj is the one that instruments the spring project.

The spring project uses a Main class to bootstrap an application context. The pom.xml under spring enables the maven exec plugin to execute the Main class.
The aspectj project has only one aspect, which intercepts every execution of DefaultListableBeanFactory.registerBeanDefinition and prints out the bean name and class name.
@Aspect
public class AspectL {

    @Pointcut("execution(* org.springframework.beans.factory.support.DefaultListableBeanFactory.registerBeanDefinition(..))")
    public void defineEntryPoint() {
    }

    @Before("defineEntryPoint()")
    public void aaa(JoinPoint joinPoint) {
        String beanName = (String) joinPoint.getArgs()[0];
        BeanDefinition beanDefinition = (BeanDefinition) joinPoint.getArgs()[1];
        System.out.println("Register BeanName" + beanName + " with: " + beanDefinition.getBeanClassName());
    }
}
The aspectj pom.xml enables the aspectj plugin to weave the spring classes, which are already compiled into a jar. To weave a binary jar, the plugin has to include it as a dependency under <weaveDependencies>.
To run the example:
In the parent module: mvn clean install
In the aspectj module: mvn exec:java prints all the tracing information:

Register BeanNamepropertyBean with: org.shaozhen.sample.PropertyFactoryBean
Register BeanNametest with: org.shaozhen.sample.TestFactoryBean
Register BeanNamecomponents with: org.shaozhen.sample.component.Components
Register BeanNameorg.springframework.context.annotation.internalConfigurationAnnotationProcessor with: org.springframework.context.annotation.ConfigurationClassPostProcessor
Register BeanNameorg.springframework.context.annotation.internalAutowiredAnnotationProcessor with: org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor
Register BeanNameorg.springframework.context.annotation.internalRequiredAnnotationProcessor with: org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor
Register BeanNameorg.springframework.context.annotation.internalCommonAnnotationProcessor with: org.springframework.context.annotation.CommonAnnotationBeanPostProcessor
Register BeanNameorg.springframework.context.annotation.ConfigurationClassPostProcessor.importAwareProcessor with: org.springframework.context.annotation.ConfigurationClassPostProcessor$ImportAwareBeanPostProcessor

The code can be accessed from here:

Saturday, January 11, 2014

Java 8 lambda with combiner

Lambda expressions and functional programming are advanced features of Java 8. Here I would like to explain why and how the combiner is used, with a practical example.

public class Person {

    enum Sex { MALE, FEMALE }

    String name;
    int age;
    Sex sex;

    Person(String name, int age, Sex sex) {
        this.name = name;
        this.age = age;
        this.sex = sex;
    }

    public Sex getSex() { return sex; }
    public int getAge() { return age; }
}

        Person p1 = new Person("Shaozhen", 5, Sex.MALE);
        Person p2 = new Person("Mike", 10, Sex.MALE);
        Person p3 = new Person("Darin", 20, Sex.MALE);
        Person p4 = new Person("Alice", 25, Sex.FEMALE);
        Person p5 = new Person("Mark", 30, Sex.MALE);

        List<Person> persons = Arrays.asList(p1, p2, p3, p4, p5);

The above code initializes a list of five persons; each person has name, age and sex attributes. Now the problem is to use lambdas/functions [1] to find the average age of the male persons. In order to show the usage of the combiner we use a custom IntConsumer.
class Averager implements IntConsumer {
    private int total = 0;
    private int count = 0;

    public double average() {
        return count > 0 ? ((double) total) / count : 0;
    }

    @Override
    public void accept(int i) {
        total += i;
        count++;
    }

    public void combine(Averager other) {
        // left empty on purpose for now; see the discussion below
    }
}

Then we can use lambda to filter the array and calculate the average. -> p.getSex() == male).map(p -> p.getAge()).collect(Averager::new, Averager::accept, Averager::combine).average()
The above code gets the collection's stream (lazily: because of the filter operation, we do not want the mapping function to apply to every item). The collect method evaluates the lazy expression, taking method references as arguments: Averager::new says how to create the result object, and Averager::accept says how to accumulate each item. The implementation of Averager is straightforward: whenever a new item is scanned, the person's age is added to the total variable and count is incremented by 1. Then total divided by count gives the average age.

 However, assume the persons list is huge and the computer has multiple cores. How can the program take advantage of parallel processing across cores to speed things up? Since Java 7 there is the fork/join [2] paradigm, which splits the workload recursively and aggregates the results. Java 8 streams provide a way to process collections in parallel with fork/join.
    persons.parallelStream().filter(p -> p.getSex() == Sex.MALE).map(p -> p.getAge()).collect(Averager::new, Averager::accept, Averager::combine).average();
The above code uses parallelStream to process the stream with fork/join. However, after running the program we get a wrong result: 5. The reason is that we have not implemented the combine method, so whenever it is applied it does nothing and keeps only the left argument's partial result, which is the first male person's age of 5. Now let's implement the combine method and get the correct result.
    public void combine(Averager other) {
        total += other.total;
        count += other.count;
    }

To sum up the discussion, the keywords of Java 8 here are lambda, functional programming, stream and fork/join. The combiner is very similar to the reduce step of map/reduce [3].
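Putting the pieces together, here is a self-contained sketch of the whole example (the class and method names mirror the snippets above; samplePersons and averageMaleAge are helper names added for this sketch):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.IntConsumer;
import java.util.stream.Stream;

public class AverageAge {

    enum Sex { MALE, FEMALE }

    static class Person {
        final String name;
        final int age;
        final Sex sex;

        Person(String name, int age, Sex sex) {
            this.name = name;
            this.age = age;
            this.sex = sex;
        }
    }

    // Same accumulator as in the article, with combine() implemented so the
    // parallel (fork/join) version merges partial results correctly.
    static class Averager implements IntConsumer {
        private int total = 0;
        private int count = 0;

        public double average() {
            return count > 0 ? ((double) total) / count : 0;
        }

        @Override
        public void accept(int i) {
            total += i;
            count++;
        }

        public void combine(Averager other) {
            total += other.total;
            count += other.count;
        }
    }

    static List<Person> samplePersons() {
        return Arrays.asList(
                new Person("Shaozhen", 5, Sex.MALE),
                new Person("Mike", 10, Sex.MALE),
                new Person("Darin", 20, Sex.MALE),
                new Person("Alice", 25, Sex.FEMALE),
                new Person("Mark", 30, Sex.MALE));
    }

    static double averageMaleAge(List<Person> persons, boolean parallel) {
        Stream<Person> stream = parallel ? persons.parallelStream() : persons.stream();
        return stream.filter(p -> p.sex == Sex.MALE)
                .map(p -> p.age)
                .collect(Averager::new, Averager::accept, Averager::combine)
                .average();
    }

    public static void main(String[] args) {
        // (5 + 10 + 20 + 30) / 4 = 16.25 for both execution modes
        System.out.println(averageMaleAge(samplePersons(), false));
        System.out.println(averageMaleAge(samplePersons(), true));
    }
}
```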


  1. Lambda tutorial
  2. fork/join tutorial
  3. Map Reduce

Tuesday, October 15, 2013

Cloudfoundry makes continuous integration easy

Cloudfoundry works as a PAAS that makes the dev ops work for continuous integration much easier.

Without a good PAAS, setting up a stable and reusable build environment requires a lot of effort. For example, the build requires a tomcat container, the Jenkins slave machine needs to avoid tomcat port conflicts, the web application has other service dependencies for testing, and you need profiles in your pom.xml to separate the different environments, and so on.

I will describe an experiment of using cloudfoundry and maven to do a clean integration test.


  • Register a cloudfoundry trial account, or create a lightweight cloudfoundry yourself (recommended, since you will get a better understanding of cloudfoundry principles and components)

Set up the POM (see the example project's cloudfoundry branch):
  • Add cloudfoundry credentials to your maven settings.xml

  • Add cloudfoundry maven plugin (cf-maven-plugin) for pre-integration-test and post-integration-test

The above plugin has two executions; the first is bound to the pre-integration-test life cycle phase with a push goal to set up a tomcat environment. The application can be configured with an appname, url and so on. What actually happens during push inside cloudfoundry? In short, the cf client uploads the war to cloudfoundry (at the specified target url), and cloudfoundry is smart enough to create a ready tomcat container based on the buildpacks, the url (it has an internal DNS server) and the app name. Interested readers can refer to the cloudfoundry architecture and how-applications-are-staged documents.

  • Add maven fail safe plugin to run your integration tests.

Since we parameterized the app name, we can just pass the url into our tests so they know which url to hit. Because of the router and DNS server in cloudfoundry, we do not need to worry about tomcat port configuration/collisions: each Jenkins build just passes in a different app name (e.g. the build number), so builds can execute concurrently.

To run the build: mvn clean install -Dapp=test

Cloudfoundry is also flexible enough to integrate with other application containers (controlled by the buildpack). Another feature, services (your application dependencies, e.g. mysql, redis...), can be configured and deployed to the cloudfoundry environment so your application can easily bind to them during push with minimal configuration. Readers interested in cloudfoundry deployment and release management can refer to bosh.

Tuesday, July 30, 2013

Junit multiple users functionalities with TestRule interface

Recently I was trying to do some performance testing integrated with the build. In some use cases I want to simulate multi-user (multi-threaded) behavior inside my junit test cases, with multiple threads executing the tests concurrently.

TestRule, a built-in junit interface, caught my attention; it is an easy and powerful way to customize your test code's behavior.

As I understand it, TestRule is an interface the developer can inject into test classes, similar to an aspectj around advice.

The code below implements TestRule and returns an inner class, CustomStatement, that injects some code before and after the test code (base.evaluate()):

public class SimpleTestRule implements TestRule {

    @Override
    public Statement apply(Statement base, Description description) {
        return new CustomStatement(base);
    }

    private class CustomStatement extends Statement {
        private Statement base;

        public CustomStatement(Statement base) {
            this.base = base;
        }

        @Override
        public void evaluate() throws Throwable {
            // Before test execution
            System.out.println("Before Test Running");
            // Test execution
            base.evaluate();
            // After test execution
            System.out.println("After Test Running");
        }
    }
}

Use the above custom test rule:
public class SimpleTestRuleTest {

    @Rule
    public TestRule rule = new SimpleTestRule();

    @Test
    public void test() {
        System.out.println("My test is running");
    }
}

Code execution Result:

Before Test Running
My test is running
After Test Running

Multiple User TestRule

After understanding the simple test rule above, we can create a more complicated rule that supports running tests on multiple threads.

The following code wraps the test execution (base.evaluate()) in independent threads.

public class MultiUserTestRule implements TestRule {
    private final int numOfUsers;
    private final ExecutorService executor;

    public MultiUserTestRule(int poolSize, int numOfUsers) {
        this.numOfUsers = numOfUsers;
        executor = Executors.newFixedThreadPool(poolSize);
    }

    @Override
    public Statement apply(Statement base, Description description) {
        return new CustomStatement(base);
    }

    private class CustomStatement extends Statement {
        private Statement base;

        public CustomStatement(Statement base) {
            this.base = base;
        }

        @Override
        public void evaluate() throws Throwable {
            for (int i = 0; i < numOfUsers; i++) {
                executor.execute(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            base.evaluate();
                        } catch (Throwable e) {
                            throw new IllegalArgumentException(e);
                        }
                    }
                });
            }
        }
    }
}

Use the above test rule:
public class MultiUserTestRuleTest {

    /**
     * The thread pool size is 10.
     * There are 5 users running the tests.
     */
    @Rule
    public TestRule rule = new MultiUserTestRule(10, 5);

    @Test
    public void test() {
        System.out.println(Thread.currentThread().getName() + " is running the test");
    }
}
The above test case prints the current thread name for demo purposes.
The result is:
pool-1-thread-1 is running the test
pool-1-thread-4 is running the test
pool-1-thread-5 is running the test
pool-1-thread-2 is running the test
pool-1-thread-3 is running the test
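One caveat worth noting: the rule above hands the work to the executor and returns immediately, so junit may finish before all the user threads complete. A minimal sketch of one way to wait for them (AwaitDemo and runAndAwait are illustrative names, not part of the original code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AwaitDemo {

    // Submit the task numOfUsers times, then block until every run finishes,
    // so code after this call observes completed work.
    static void runAndAwait(ExecutorService executor, int numOfUsers, Runnable task)
            throws InterruptedException {
        for (int i = 0; i < numOfUsers; i++) {
            executor.execute(task);
        }
        executor.shutdown();                              // stop accepting new tasks
        executor.awaitTermination(30, TimeUnit.SECONDS);  // wait for submitted ones
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger runs = new AtomicInteger();
        runAndAwait(Executors.newFixedThreadPool(10), 5, runs::incrementAndGet);
        System.out.println(runs.get() + " runs completed"); // 5 runs completed
    }
}
```

Note that shutdown() means the executor cannot be reused across tests; a per-test executor, or a CountDownLatch, avoids that limitation.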

All the above code can be found here:

Tuesday, July 16, 2013

Spring MVC testing with session scoped beans

Spring MVC test is a new feature in Spring 3.2 that enables easy functional/integration testing of a controller without bringing up a real server. However, when a controller autowires a session scoped bean and some request mapping methods operate on that bean, making assertions on those operations becomes tricky. Here I will propose an approach to test it.

Assume there is a session scoped bean configured as below:

@Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
public class SessionScopeBean {
    private String value;

    public String getValue() {
        return value;
    }

    public void setValue(String value) {
        this.value = value;
    }
}
There is a controller autowired with the session scoped bean:
@Controller
public class HomeController {

    @Autowired
    private SessionScopeBean sessionBean;

    @RequestMapping(value = "/sessionScope/{value}")
    public String test(@PathVariable String value) {
        sessionBean.setValue(value);
        return "home";
    }
}
The above controller modifies the value field of the session scoped bean; how do we then retrieve it and assert the appropriate value in the session? A couple of tips:

  • Spring saves the session scoped bean as a session attribute named "scopedTarget.${idOfTheBean}"
  • Spring mvc test provides request().sessionAttribute("attr", hamcrestMatcher) to assert on the session
  • the hamcrest matcher hasProperty can be plugged into the above expression

public class ControllerTest {
    private MockMvc mockMvc;
    @Autowired
    private WebApplicationContext wac;

    @Before
    public void setup() {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build();
    }

    @Test
    public void test() throws Exception {
        mockMvc.perform(get("/sessionScope/abc"))
            .andExpect(request().sessionAttribute("scopedTarget.sessionScopeBean", hasProperty("value", is("abc"))));
    }
}
The above test shows how the session attribute is asserted.