Minborg

Friday, August 12, 2016

The Need for Speed in Web Applications

Often, web applications are slower than we would like them to be. Companies like Google and Facebook have in-house solutions for speeding up their applications. There is a need for a third-party tool that we developers can use to achieve something similar without having to spend hours optimizing the code and the database.

For the last couple of months, I have been busy contributing to a project that speeds up response times and also makes it much easier to develop the back-end parts of web applications. The project is named Ext Speeder and was developed together with Sencha, the company behind Ext JS. It allows Ext JS developers to speed up their data grids more than 10 times.

Automatically Generated Back End 

All back-end developers know that it takes a lot of work to connect a front-end application to the back end. Among the tasks are modeling the database, securing connections, parsing HTTP commands, deserializing parameters, managing database connections, converting to SQL, optimizing queries, parsing database responses, formatting JSON, writing XML configuration, deploying in Java EE and, finally, verifying all the code. Depending on the project, this may take a couple of days or even several weeks.

We have created a tool that connects to an existing database and extracts the schema model so that back end code can be generated automatically. Indeed, there is no need for writing a single line of back end code in most situations. If we have special requirements (like adding our own models to the existing data), it is easy to add or modify the code that was generated by the Ext Speeder tool.

Check out this 1.5-minute video by a person who holds the current world record in developing and deploying a Sencha Ext JS application with the tool (he does it in less than 1.5 minutes). Can it be done faster?

How Much Faster Does it Get?

When an Ext Speeder application is started, it can pull in data we have selected into an in-JVM-memory store so that data can be accessed and processed much faster. Data is also organized in a column oriented way. Thanks to that, sorting and filtering on a column can be done in nanoseconds rather than in seconds. Because Ext Speeder can use an off-heap storage engine, we can pull in an almost unlimited amount of data. An Ext Speeder application can easily handle one hundred million elements. Ext Speeder can also refresh the in-memory data periodically in the background, so that we will see the latest data in the database.

Giving fair benchmark figures is always hard. To give an idea, we took an existing public database containing a large number of medical doctors in the US with over 40 million elements and stored them in a standard MySQL server (version 5.6.29) with indexes added to the relevant columns. We then compared typical web access patterns containing sorting, filtering and paging with Ext Speeder versus using the MySQL database directly. On average, Ext Speeder was more than 100 times faster.

Figure 1. Latency, Ext Speeder vs. standard MySQL (less is better) for one of the sub-tests

We used the following setup for the tests:

Property     Value
---------    -----------------------------------
Computer     Lenovo laptop
CPU          Intel i7-4720HQ @ 2.60 GHz
RAM          16 GB
OS           Windows 10
Database     MySQL 5.6.29, standard installation
Test Tool    Apache JMeter

What is the real-life gain of such speed improvements? Well, imagine a web user who is used to waiting a few seconds between each interaction. That person would suddenly get immediate feedback, and the site would feel much more responsive. Check out this 1-minute video to see the difference.


The REST API

Request

The tool uses a standard REST API for querying. If we have a database table named 'doctor', we can query it like this:

http://localhost:4567/MyProject/doctor?
    callback=cb&
    start=10&
    limit=100&
    sort=[{"property":"last_name","direction":"ASC"}]&
    filter=[{"property":"graduation_year","operator":"lt","value":"1970"}]

This retrieves the doctors that graduated before 1970, sorted by last name, starting at the 10th such doctor and limiting the result to at most 100 physicians. On my laptop, the round-trip REST call completed in less than 0.04 seconds (a human would not notice that kind of delay).

The 'sort' attribute can contain several columns so that sorting can be done on many levels.

The 'filter' attribute can be composed of several filters so that several conditions can be applied. We can use the operators Equals ("eq"), Not Equals ("ne"), Less Than ("lt"), Less or Equal ("le"), Greater Than ("gt"), Greater or Equal ("ge") and Contains ("like").
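For example, a request that sorts on two levels (state, then last name) and combines two conditions on graduation_year might look like the sketch below. This is illustrative only; the columns are taken from the response example further down, and the exact semantics of combining several filters depend on the tool:

```
http://localhost:4567/MyProject/doctor?
    sort=[{"property":"state","direction":"ASC"},
          {"property":"last_name","direction":"ASC"}]&
    filter=[{"property":"graduation_year","operator":"ge","value":"1960"},
            {"property":"graduation_year","operator":"lt","value":"1970"}]
```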

Response

The response to the request above is a JSON string that contains all the matching doctors in the given order. This also makes it easy to interface with other languages like JavaScript, PHP or C#. This is an example of how a response might look (with limit=2 to reduce size):

cb({
  "total": 50457,
  "data": [
    {
      "id": 1687657,
      "last_name": "AARON",
      "first_name": "JOHN",
      "suffix": "",
      "gender": "M",
      "credential": "",
      "medical_school_name": "OTHER",
      "graduation_year": 1966,
      "primary_speciality": "GASTROENTEROLOGY",
      "second_speciality": "",
      "organization": "ATLANTIC COAST GASTROENTEROLOGY ASSOCIATES",
      "organization_dba_name": "",
      "street_address_1": "1944 STATE ROUTE 33",
      "street_address_2": "",
      "supress_street_address_2": "Y",
      "city": "NEPTUNE",
      "state": "NJ",
      "zip_code": "077534863",
      "real_zips_ext_id": "RZ-US-07753",
      "claims_based_aff_CCN_1": "310038",
      "claims_based_aff_LBN_1": "ROBERT WOOD JOHNSON UNIVERSITY HOSPITAL, INC"
    },
    {
      "id": 258680,
      "last_name": "ABADEE",
      "first_name": "RASHEED",
      "suffix": "",
      "gender": "F",
      "credential": "MD",
      "medical_school_name": "OTHER",
      "graduation_year": 1968,
      "primary_speciality": "PHYSICAL MEDICINE AND REHABILITATION",
      "second_speciality": "",
      "organization": "",
      "organization_dba_name": "",
      "street_address_1": "1300 W 7TH ST",
      "street_address_2": "SAN PEDRO HOSP",
      "supress_street_address_2": "N",
      "city": "SAN PEDRO",
      "state": "CA",
      "zip_code": "90732",
      "real_zips_ext_id": "RZ-US-90732",
      "claims_based_aff_CCN_1": "050078",
      "claims_based_aff_LBN_1": "PROVIDENCE HEALTH SYSTEM - SOUTHERN CALIFORINA"
    }
  ]
});

The 'total' property in the response indicates how many of the rows actually matched the filter (in total) so that data grids can set the appropriate scroll bar location and size.

What Does it Look Like?

This is how an Ext JS data grid application might look with an Ext Speeder back end:



Deployment

Deployment is easy. Either deploy an Ext Speeder application as a stand-alone application (just run its main Java method) or deploy it as a Java EE application (upload the self-contained WAR file to the server). Any Java EE server will do, for example Tomcat, GlassFish or Oracle WebLogic. This way, the application can automatically benefit from features such as company security policies, authentication, encryption, load balancing, etc.

Try it for Free!

Go to http://www.extspeeder.com/ to request a free trial. The User's Manual can be found on the same web page. I would be happy to get feedback on the experience or improvement suggestions as a comment on this post.



Monday, April 11, 2016

Java 8: Use Smart Streams with Your Database in 2 Minutes

Streaming with Speedment

Duke and Spire Mapping Streams.

Back in the ancient '90s, we Java developers had to struggle to make our database applications work properly. There was a lot of coding, debugging and tweaking. Still, the applications often blew up right in our faces, to our ever-increasing agony. Things gradually improved over time with better language, JDBC and framework support. I'd like to think that we developers also improved, but there are different opinions on that...

When Java 8 finally arrived, some colleagues and I started an open-source project to take the whole Java/DB issue one step further by leveraging Java 8's stream library, so that database tables could be viewed as pure Java 8 streams. Speedment was born! Now we can write type-safe database applications without having to write SQL code anymore.

Speedment connects to an existing database and generates Java code. We can then use the generated code to conveniently query the database using standard Java 8 streams. With the new version 2.3 hitting the shelves just recently, we can even run parallel query streams!

Let's take some examples assuming we have the following database table defined:
 
CREATE TABLE `user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(45) NOT NULL,
  `firstName` varchar(45) DEFAULT NULL,
  `lastName` varchar(45) DEFAULT NULL,
  `email` varchar(45) NOT NULL,
  `password` varchar(45) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `email_UNIQUE` (`email`),
  UNIQUE KEY `username_UNIQUE` (`username`)
) ENGINE=InnoDB;

Speedment is free for the open-source databases MySQL, PostgreSQL and MariaDB. There is also support for commercial databases, like Oracle, as an enterprise add-on feature.

Examples


Querying

Select all users with a ".com" mail address and print them:
        users.stream()
            .filter(EMAIL.endsWith(".com"))
            .forEach(System.out::println);
Select users where the first name is either "Adam" or "Cecilia" and sort them in username order, then take the first 10 of those and extract the email address and print it.
        users.stream()
            .filter(FIRST_NAME.in("Adam", "Cecilia"))
            .sorted(USERNAME.comparator())
            .limit(10)
            .map(User::getEmail)
            .forEach(System.out::println);

Creating Database Content

Create a new user and persist it in the database:
        users.newEmptyEntity()
            .setUsername("thorshammer")
            .setEmail("mastergamer@castle.com")
            .setPassword("uE8%3KwB0!")
            .persist();

Updating Database Content

Find the user with id = 10 and update the password:
        users.stream()
            .filter(ID.equal(10))
            .map(u -> u.setPassword("pA6#nLaX1Z"))
            .forEach(User::update); 

Removing Database Content

Remove the user with id = 100:
        users.stream()
            .filter(ID.equal(100))
            .forEach(User::remove);

New Cool Stuff: Parallel Queries

Do some kind of expensive operation in parallel for users with 10_000 <= id < 20_000
        users.stream()
            .parallel()
            .filter(ID.between(10_000, 20_000))
            .forEach(expensiveOperation());

Setup

Setup code for the examples above:
       final Speedment speedment = new JavapotApplication()
            .withPassword("javapot") // Replace with your real DB password
            .build();

        final Manager<User> users = speedment.managerOf(User.class);

Get Started with Speedment


Read more here on GitHub on how to get started with Speedment.


Sunday, March 13, 2016

Put Your Java 8 Method References to Work

Method References

As we all know by now, we can use Method References, like String::isEmpty, in Java 8 to reference a method that is being used when we, for example, stream over elements. Take a look at this code snippet:
    Stream.of("A", "", "B").filter(String::isEmpty).count();
which will produce the result 1 (because there is just one empty element in the stream). But if we want to filter out non-empty strings, we need to write .filter(s -> !s.isEmpty()), which is a lambda. Clearly, there is an annoying asymmetry here. We can use a method reference, but not its negation. We can write predicate.negate(), but we cannot write String::isEmpty.negate() or !String::isEmpty.

Why is that? It's because a Method Reference is not a Lambda or a Functional Interface. However, a Method Reference can be resolved to one or several Functional Interfaces using Java's type inference. Our example String::isEmpty can, in fact, be resolved to at least:

  • Predicate<String>
  • Function<String, Boolean>
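Both resolutions can be seen side by side in a short, self-contained snippet (the class name is mine):

```java
import java.util.function.Function;
import java.util.function.Predicate;

public class ResolveDemo {

    public static void main(String[] args) {
        // The same method reference resolves to either functional
        // interface, depending on the target type:
        Predicate<String> asPredicate = String::isEmpty;
        Function<String, Boolean> asFunction = String::isEmpty;

        System.out.println(asPredicate.test(""));   // prints true
        System.out.println(asFunction.apply("A"));  // prints false
    }
}
```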

So, we need to somehow resolve all the potential ambiguities and decide which Functional Interface we want to turn the Method Reference into. Read this post and find out how to partially fix this issue. I have used code presented here in the open-source project Speedment that makes databases look like Java 8 Streams. Feel free to try Speedment out.

Speedment also contains predicate builders that allow you to use functions like Entity.NAME::isEmpty and Entity.NAME::isNotEmpty directly.

Resolving Method References

The problem can be partially fixed by introducing some "plumbing" in the form of static methods that take a Method Reference and return it as a view of a specific Functional Interface. Consider this short static method:
    public static <T> Predicate<T> as(Predicate<T> predicate) {
        return predicate;
    }
Now, if we import that method statically we can, in fact, use a Method Reference more easily as shown in this short example:
    Stream.of("A", "", "B").filter(as(String::isEmpty).negate()).count();
The code will return 2 which is the number of non-empty elements in the stream. This is a step forward in terms of Method Reference usage. Another benefit is that this solution allows us to compose our predicates more easily like this:
.filter(as(String::isEmpty).negate().and("A"::equals))
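Putting the pieces together, here is a complete, runnable sketch of the idea (the class name is mine):

```java
import java.util.function.Predicate;
import java.util.stream.Stream;

public class AsDemo {

    // The "plumbing" method: resolves a method reference
    // to a Predicate view of it
    public static <T> Predicate<T> as(Predicate<T> predicate) {
        return predicate;
    }

    public static void main(String[] args) {
        // Count the non-empty strings
        long nonEmpty = Stream.of("A", "", "B")
                .filter(as(String::isEmpty).negate())
                .count();
        System.out.println(nonEmpty); // prints 2

        // Compose predicates: non-empty AND equal to "A"
        long composed = Stream.of("A", "", "B")
                .filter(as(String::isEmpty).negate().and("A"::equals))
                .count();
        System.out.println(composed); // prints 1
    }
}
```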

Resolving All Method References

But there is still a problem to resolve. We cannot simply create a lot of static as() methods, because a Method Reference might be resolvable to several potential as() methods, in the same way as listed in the beginning of this post. A better approach is to append the Functional Interface type name to each static method, allowing us to programmatically select a particular Method-Reference-to-Functional-Interface conversion method. Here is a utility class that allows Method References to be converted to any matching Functional Interface that resides in the standard Java package java.util.function.

Pull down the latest version directly from GitHub here

import java.util.function.*;

/**
 *
 * @author Per Minborg
 */
public class FunctionCastUtil {

    public static <T, U> BiConsumer<T, U> asBiConsumer(BiConsumer<T, U> biConsumer) {
        return biConsumer;
    }

    public static <T, U, R> BiFunction<T, U, R> asBiFunction(BiFunction<T, U, R> biFunction) {
        return biFunction;
    }

    public static <T> BinaryOperator<T> asBinaryOperator(BinaryOperator<T> binaryOperator) {
        return binaryOperator;
    }

    public static <T, U> BiPredicate<T, U> asBiPredicate(BiPredicate<T, U> biPredicate) {
        return biPredicate;
    }

    public static BooleanSupplier asBooleanSupplier(BooleanSupplier booleanSupplier) {
        return booleanSupplier;
    }

    public static <T> Consumer<T> asConsumer(Consumer<T> consumer) {
        return consumer;
    }

    public static DoubleBinaryOperator asDoubleBinaryOperator(DoubleBinaryOperator doubleBinaryOperator) {
        return doubleBinaryOperator;
    }

    public static DoubleConsumer asDoubleConsumer(DoubleConsumer doubleConsumer) {
        return doubleConsumer;
    }

    public static <R> DoubleFunction<R> asDoubleFunction(DoubleFunction<R> doubleFunction) {
        return doubleFunction;
    }

    public static DoublePredicate asDoublePredicate(DoublePredicate doublePredicate) {
        return doublePredicate;
    }

    public static DoubleToIntFunction asDoubleToIntFunction(DoubleToIntFunction doubleToIntFunction) {
        return doubleToIntFunction;
    }

    public static DoubleToLongFunction asDoubleToLongFunction(DoubleToLongFunction doubleToLongFunction) {
        return doubleToLongFunction;
    }

    public static DoubleUnaryOperator asDoubleUnaryOperator(DoubleUnaryOperator doubleUnaryOperator) {
        return doubleUnaryOperator;
    }

    public static <T, R> Function<T, R> asFunction(Function<T, R> function) {
        return function;
    }

    public static IntBinaryOperator asIntBinaryOperator(IntBinaryOperator intBinaryOperator) {
        return intBinaryOperator;
    }

    public static IntConsumer asIntConsumer(IntConsumer intConsumer) {
        return intConsumer;
    }

    public static <R> IntFunction<R> asIntFunction(IntFunction<R> intFunction) {
        return intFunction;
    }

    public static IntPredicate asIntPredicate(IntPredicate intPredicate) {
        return intPredicate;
    }

    public static IntSupplier asIntSupplier(IntSupplier intSupplier) {
        return intSupplier;
    }

    public static IntToDoubleFunction asIntToDoubleFunction(IntToDoubleFunction intToDoubleFunction) {
        return intToDoubleFunction;
    }

    public static IntToLongFunction asIntToLongFunction(IntToLongFunction intToLongFunction) {
        return intToLongFunction;
    }

    public static IntUnaryOperator asIntUnaryOperator(IntUnaryOperator intUnaryOperator) {
        return intUnaryOperator;
    }

    public static LongBinaryOperator asLongBinaryOperator(LongBinaryOperator longBinaryOperator) {
        return longBinaryOperator;
    }

    public static LongConsumer asLongConsumer(LongConsumer longConsumer) {
        return longConsumer;
    }

    public static <R> LongFunction<R> asLongFunction(LongFunction<R> longFunction) {
        return longFunction;
    }

    public static LongPredicate asLongPredicate(LongPredicate longPredicate) {
        return longPredicate;
    }

    public static LongSupplier asLongSupplier(LongSupplier longSupplier) {
        return longSupplier;
    }

    public static LongToDoubleFunction asLongToDoubleFunction(LongToDoubleFunction longToDoubleFunction) {
        return longToDoubleFunction;
    }

    public static LongToIntFunction asLongToIntFunction(LongToIntFunction longToIntFunction) {
        return longToIntFunction;
    }

    public static LongUnaryOperator asLongUnaryOperator(LongUnaryOperator longUnaryOperator) {
        return longUnaryOperator;
    }

    public static <T> ObjDoubleConsumer<T> asObjDoubleConsumer(ObjDoubleConsumer<T> objDoubleConsumer) {
        return objDoubleConsumer;
    }

    public static <T> ObjIntConsumer<T> asObjIntConsumer(ObjIntConsumer<T> objIntConsumer) {
        return objIntConsumer;
    }

    public static <T> ObjLongConsumer<T> asObjLongConsumer(ObjLongConsumer<T> objLongConsumer) {
        return objLongConsumer;
    }

    public static <T> Predicate<T> asPredicate(Predicate<T> predicate) {
        return predicate;
    }

    public static <T> Supplier<T> asSupplier(Supplier<T> supplier) {
        return supplier;
    }

    public static <T, U> ToDoubleBiFunction<T, U> asToDoubleBiFunction(ToDoubleBiFunction<T, U> toDoubleBiFunction) {
        return toDoubleBiFunction;
    }

    public static <T> ToDoubleFunction<T> asToDoubleFunction(ToDoubleFunction<T> toDoubleFunction) {
        return toDoubleFunction;
    }

    public static <T, U> ToIntBiFunction<T, U> asToIntBiFunction(ToIntBiFunction<T, U> toIntBiFunction) {
        return toIntBiFunction;
    }

    public static <T> ToIntFunction<T> asToIntFunction(ToIntFunction<T> toIntFunction) {
        return toIntFunction;
    }

    public static <T, U> ToLongBiFunction<T, U> asToLongBiFunction(ToLongBiFunction<T, U> toLongBiFunction) {
        return toLongBiFunction;
    }

    public static <T> ToLongFunction<T> asToLongFunction(ToLongFunction<T> toLongFunction) {
        return toLongFunction;
    }

    public static <T> UnaryOperator<T> asUnaryOperator(UnaryOperator<T> unaryOperator) {
        return unaryOperator;
    }

    private FunctionCastUtil() {
    }

}
So after we have imported the relevant methods statically, we can write:
    Stream.of("A", "", "B").filter(asPredicate(String::isEmpty).negate()).count();

An Even Better Solution

It would be even better if all the Functional Interfaces themselves contained a static method that could take a suitable Method Reference and turn it into a typed Functional Interface. For example, the standard Java Predicate Functional Interface would then look like this:
@FunctionalInterface
public interface Predicate<T> {

    boolean test(T t);

    default Predicate<T> and(Predicate<? super T> other) {...}

    default Predicate<T> negate() {...}

    default Predicate<T> or(Predicate<? super T> other) {...}

    static <T> Predicate<T> isEqual(Object targetRef) {...}

    // New proposed support method to return a 
    // Predicate view of a Functional Reference 
    public static <T> Predicate<T> of(Predicate<T> predicate) {
        return predicate;
    }
    
}
This would allow us to write:
    Stream.of("A", "", "B").filter(Predicate.of(String::isEmpty).negate()).count();
Which I personally think looks good!

Contact your nearest OpenJDK developer and make the proposal!

Wednesday, March 9, 2016

JEP 286: Java Will Get Local Variable Type Inference

The Proposal

A new JEP has been proposed by Brian Goetz that would simplify the writing of Java applications. The proposal is to introduce Local Variable Type Inference, a feature that exists in many other languages today. With this new proposed feature we could write code like:

var list = new ArrayList<String>();  // infers ArrayList<String>
var stream = list.stream();          // infers Stream<String>

According to the proposal, the local variable will be inferred to its most specific type. So, in the example above, the variable list will be of type ArrayList<String> and not just List<String> or Collection<String>. This makes sense and if we want to demote our objects, we simply have to declare them as we always did.

There is a debate over the meaning of var and whether it should also mean that the variable becomes automatically final. After analyzing a huge corpus of Java code, the team's main position now is that variables declared with var are, in fact, not final. If one wants a variable to be final, it could simply be declared as final var, or there might be another syntax, like val, that effectively means the same thing as final var.

What Will Not Work?

Local type inference cannot be used for variables without initializers or with initializers that are just null. This is obvious, since it is not possible to infer the type from such statements. Examples of code that will not work are:


        var x;
            ^
  (cannot use 'var' on variable without initializer)

        var f = () -> { };
            ^
  (lambda expression needs an explicit target-type) 

        var g = null;
            ^
  (variable initializer is 'null')

        var c = l();
            ^
  (inferred type is non denotable)

        var m = this::l;
            ^
  (method reference needs an explicit target-type)

        var k = { 1 , 2 };
            ^
  (array initializer needs an explicit target-type) 
 

Usage

Local Type Inference would be restricted to local variables with initializers, indexes in the enhanced for-loop, and locals declared in a traditional for-loop. It would not be available for method formals, constructor formals, method return types, fields, catch formals, or any other kind of variable declaration.
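A short sketch of what the allowed forms would look like. Note that this will not compile on Java 8; it needs a compiler implementing the proposal (which eventually shipped as Java 10):

```java
import java.util.ArrayList;

public class VarDemo {

    public static void main(String[] args) {
        var list = new ArrayList<String>(); // local with initializer
        list.add("a");
        list.add("b");

        for (var element : list) {          // index in enhanced for-loop
            System.out.println(element);
        }

        for (var i = 0; i < list.size(); i++) { // local in traditional for-loop
            System.out.println(i);
        }
    }
}
```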

A final note is that the word var is proposed to be a reserved type name and not a Java keyword. This means that code that uses var as a variable, method, or package name will not be affected; code that uses var as a class or interface name will be affected (but these names violate the Java naming conventions anyhow).

Read more on JEP 286 here. You can also provide feedback to the JEP authors using a link on the JEP page.

Java 8: Declare Private and Protected Methods in Interfaces

Learn How to Declare Private and Protected Methods in Java 8 Interfaces


When Java 8 was introduced, we could use default methods in interfaces. The main driver for this feature was to allow expansion of an interface while retaining backward compatibility for older interface versions. One example is the introduction of the stream() method in the existing Collection interface.

Sometimes, when we want to introduce several default methods, they may share some common code base and then it would be nice if we could use private methods in the interface. This way, we can reuse our code and also prevent it from being exposed to classes that are using or are implementing the interface.

But there is a problem: private and protected access in interfaces was postponed to Java 9. So how can we use private interface methods in Java 8 today?

A Simple Solution


Suppose that we have an interface Foo with two methods, bar() and bazz(), that both return some hard-to-calculate result emanating from some shared code, like this:

public interface Foo {

    default int bar() {
        return complicatedMethodWithManyLinesOfCode();
    }

    default int bazz() {
        return complicatedMethodWithManyLinesOfCode() + 1;
    }

    
    // Will not work in Java 8 because interface methods cannot be private!
    private int complicatedMethodWithManyLinesOfCode() {
        // Actual code not shown...
        return 0;
    }

}
By introducing a class that holds the private method, we can "hide" the method from outside access and almost get away with private methods in Java 8 interfaces. It can be done like this:
public interface Foo {

    default int bar() {
        return Hidden.complicatedMethodWithManyLinesOfCode();
    }

    default int bazz() {
        return Hidden.complicatedMethodWithManyLinesOfCode() + 1;
    }

    class Hidden {

        private static int complicatedMethodWithManyLinesOfCode() {
            // Actual code not shown...
            return 0;
        }
    }

}
The method Hidden::complicatedMethodWithManyLinesOfCode is not visible from outside classes or interfaces, but the Hidden class itself can be seen. However, methods and fields in Hidden cannot be seen if they are private.

This scheme can also be applied for protected interface method access. Technically, we could extend the Hidden class in an interface that also extends the original interface Foo. Remember that protected methods are also package visible, so if we extend or use the interface from the same package, the protected methods are visible (as they always are).
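A sketch of that protected variant (hypothetical names; both interfaces are shown in one file for brevity, but the subclass trick also works across packages, since protected static members are accessible to subclasses):

```java
interface Foo {

    default int bar() {
        return Hidden.complicatedMethod();
    }

    class Hidden {
        // protected instead of private: reachable from subclasses of Hidden
        protected static int complicatedMethod() {
            // Actual code not shown...
            return 0;
        }
    }
}

// An extending interface reuses the "hidden" method via a class
// that extends Foo.Hidden:
interface ExtendedFoo extends Foo {

    default int extra() {
        return ExtendedHidden.reuse() + 1;
    }

    class ExtendedHidden extends Foo.Hidden {
        static int reuse() {
            return complicatedMethod(); // subclass access to protected member
        }
    }
}
```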

One drawback is that the hidden methods cannot access other methods in the interface. This latter drawback can easily be fixed by letting the hidden static method take a parameter of the interface type. Suppose that the complicatedMethodWithManyLinesOfCode method needs another value from the Foo interface that can be obtained via some interface method named buzz(), then it could look something like this:
public interface Foo {

    default int bar() {
        return Hidden.complicatedMethodWithManyLinesOfCode(this);
    }

    default int bazz() {
        return Hidden.complicatedMethodWithManyLinesOfCode(this) + 1;
    }

    int buzz();

    class Hidden {

        private static int complicatedMethodWithManyLinesOfCode(Foo foo) {
            // Actual code not shown...
            return 0 + foo.buzz();
        }
    }

}

Monday, March 7, 2016

Java: Immortal Objects and Object Resurrection

What is Object Resurrection?

A Java object is eligible for garbage collection when no other object references it. When the JVM's Garbage Collector eventually is about to remove an unused object, the object's finalize() method is invoked. But, if we re-create a reference to the object in the object's own finalize() method, the object can be resurrected. In such cases, the JVM will detect that the object is referenced again and will refrain from removing it. Metaphorically, the object has been resurrected from the dead...

import java.util.HashSet;
import java.util.Set;

public class Immortal {

    private static final Set<Immortal> immortals = new HashSet<>();

    @Override
    protected void finalize() throws Throwable {
        System.out.println(Immortal.class.getSimpleName() + "::finalize for " + this);
        immortals.add(this); // Resurrect the object by creating a new reference 
    }

}
The resurrection property can be tested the following way:
import java.io.IOException;

public class NewMain {

    public static void main(String[] args) {
        new Immortal();
        System.gc();
        sleep(1_000);
        System.gc();
        prompt("Press any key...");
    }

    private static void prompt(String msg) {
        try {
            System.out.println(msg);
            System.in.read();
        } catch (IOException io) {
        }
    }

    private static void sleep(long duration) {
        try {
            Thread.sleep(duration);
        } catch (InterruptedException ie) {
        }
    }

}
Which will give the following output:
Immortal::finalize for com.blogspot.minborgsjavapot.resurected_object.Immortal@635cb856
Press any key...
By inspecting the Java heap, we can also see that the object is still there, even though its finalizer was called:
pemi$ jps
21735 NewMain
21736 Jps

pemi$ jmap -histo 21735 | grep Immortal
 164:             1             16  com.blogspot.minborgsjavapot.resurected_object.Immortal

How Many Times is the Finalizer Invoked?

If a resurrected object is later de-referenced, it is again eligible for garbage collection. However, this time the finalize() method will not be invoked, since Java invokes the finalizer at most one time. As we may recall, there is no guarantee that the finalizer is ever invoked at all. For example, if the program terminates for any reason, the objects in the JVM are simply abandoned and their finalizers will not be invoked, as can be seen in this example:

public class NewMain2 {

    public static void main(String[] args) {
        new Immortal();
    }

}

When we run the above code snippet, we observe that Immortal::finalize is never called.

Is Object Resurrection Good?

As always when using the finalize() method, we must be very cautious. The general recommendation for us Java developers is to not use finalize() at all. Furthermore, one could argue that resurrecting an object is the same as intentionally creating a memory leak.

However, there are some interesting applications for object resurrection. Perhaps we want to do some post-mortem analysis of our objects without changing the actual application that is using them. By using object resurrection, we could save those objects and analyze their internal state later, independently of the applications that use them.

Saturday, March 5, 2016

Java 8: A Type Safe Map Builder Using Alternating Interface Exposure

Expose Your Classes Dynamically

Duke and Spire exposing another look... 
When I was a Java newbie, I remember thinking that there should be a way of removing or hiding methods in my classes that I did not want to expose, like overriding a public method with a private one or something like that (which of course cannot and should not be possible). Obviously, today we all know that we can achieve the same goal by exposing an interface instead.

By using a scheme named Alternating Interface Exposure, we can view a class' methods dynamically and type-safely, so that the same class can enforce a pattern in which it is supposed to be used.

Let me take an example. Say we have a Map builder that is called by successively adding keys and values before the actual Map can be built. The Alternating Interface Exposure scheme allows us to ensure that we call the key() and value() methods exactly the same number of times, and that the build() method is only callable (and seen, for example, in the IDE) when there are just as many keys as there are values.

The Alternating Interface Exposure scheme is used in the open-source project Speedment that I am contributing to. In Speedment, the scheme is, for example, used when building type-safe Tuples that subsequently will be built after adding elements to a TupleBuilder. This way, we get a typed Tuple2<String, Integer> = {"Meaning of Life", 42} if we write TupleBuilder.builder().add("Meaning of Life").add(42).build().
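To illustrate the core of the scheme before looking at the full builder, here is a minimal, simplified sketch (my own class and interface names; unlike the builder described in this post, this version fixes the key/value types up front rather than inferring them on first use):

```java
import java.util.HashMap;
import java.util.Map;

public class AltBuilder {

    // After a key is given, only value() is exposed (no build())
    public interface ValueStep<K, V> {
        KeyStep<K, V> value(V v);
    }

    // After a value is given (or initially), key() and build() are exposed
    public interface KeyStep<K, V> {
        ValueStep<K, V> key(K k);
        Map<K, V> build();
    }

    // One class implements both interfaces; callers only ever see
    // one interface at a time, which enforces the call pattern
    private static class Impl<K, V> implements KeyStep<K, V>, ValueStep<K, V> {
        private final Map<K, V> map = new HashMap<>();
        private K pendingKey;

        @Override
        public ValueStep<K, V> key(K k) { pendingKey = k; return this; }

        @Override
        public KeyStep<K, V> value(V v) { map.put(pendingKey, v); return this; }

        @Override
        public Map<K, V> build() { return map; }
    }

    public static <K, V> KeyStep<K, V> builder() { return new Impl<>(); }

    public static void main(String[] args) {
        Map<Integer, String> map = AltBuilder.<Integer, String>builder()
                .key(1).value("One")
                .key(2).value("Two")
                .build();
        System.out.println(map);
    }
}
```

With these interfaces, builder().key(1).build() does not even compile, because ValueStep does not expose build().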

Using a Dynamic Map Builder

I have written about the Builder Pattern several times in previous posts (e.g. here), and I encourage you to read up on the concept there, should you not be familiar with it, before reading on.

The task at hand is to produce a Map builder that dynamically exposes a number of implementing methods using a number of context dependent interfaces. Furthermore, the builder shall "learn" its key/value types the first time they are used and then enforce the same type of keys and values for the remaining entries.

Here is an example of how we could use the builder in our code once it is developed:
    public static void main(String[] args) {

        // Use the type safe builder
        Map<Integer, String> map = Maps.builder()
                .key(1)                 // The key type is decided here for all following keys
                .value("One")           // The value type is decided here for all following values
                .key(2)                 // Must be the same type or extend the first key type
                .value("Two")           // Must be the same type or extend the first value type
                .key(10).value("Zehn")  // And so on...
                .build();               // Creates the map!

        // Create an empty map
        Map<String, Integer> map2 = Maps.builder()
                .build();

    }

In the code above, once we start with an Integer key via the call key(1), the builder only accepts additional keys that are instances of Integer. The same is true for the values: once we call value("One"), only objects that are instances of String can be used. If we try to write value(42) instead of value("Two"), for example, we would immediately see the error in our IDE. Also, most IDEs would automatically suggest good candidates when we use code completion.

Let me elaborate on the meaning of this:

Initial Usage

The builder is created using the method Maps.builder() and the initial view returned allows us to call:
  1. build() that builds an empty Map (like in the second "empty map" example above)
  2. key(K key) that adds a key to the builder and decides the type (=K) for all subsequent keys (like key(1) above)

Once the initial key(K key) is called, another view of the builder appears exposing only:
  1. value(V value) that adds a value to the builder and decides the type (=V) for all subsequent values (like value("One") above)

Note that the build() method is not exposed in this state, because the number of keys and values differ. Writing Maps.builder().key(1).build(); is simply illegal, because there is no value associated with the key 1.

Subsequent Usage

Now that the key and value types are decided, the builder simply alternates between the two interfaces being exposed, depending on whether key() or value() was last called. If key() is called, we expose value(); if value() is called, we expose both key() and build().

The Builder

Here are the two alternating interfaces that the builder is using once the types are decided upon:
public interface KeyBuilder<K, V> {

    ValueBuilder<K, V> key(K k);

    Map<K, V> build();

}

public interface ValueBuilder<K, V> {

    KeyBuilder<K, V> value(V v);

}

Note how one interface returns the other, thereby creating an indefinite flow of alternating interfaces being exposed. Here is the actual builder that makes use of the alternating interfaces:
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;

import static java.util.stream.Collectors.toMap;

public class Maps<K, V> implements KeyBuilder<K, V>, ValueBuilder<K, V> {

    private final List<Entry<K, V>> entries;
    private K lastKey;

    public Maps() {
        this.entries = new ArrayList<>();
    }

    @Override
    public ValueBuilder<K, V> key(K k) {
        lastKey = k;
        return this; // no cast needed: Maps implements ValueBuilder<K, V>
    }

    @Override
    public KeyBuilder<K, V> value(V v) {
        entries.add(new AbstractMap.SimpleEntry<>(lastKey, v));
        return this; // no cast needed: Maps implements KeyBuilder<K, V>
    }

    @Override
    public Map<K, V> build() {
        return entries.stream()
                .collect(toMap(Entry::getKey, Entry::getValue));
    }

    public static InitialKeyBuilder builder() {
        return new InitialKeyBuilder();
    }

}

We see that the implementing class implements both of the alternating interfaces but only returns one of them, depending on whether key() or value() is called. I have "cheated" a bit by creating two initial helper classes that take care of the initial phase, where the key and value types are not yet decided. For the sake of completeness, the two "cheat" classes are also shown below:
import java.util.HashMap;
import java.util.Map;

public class InitialKeyBuilder {

    public <K> InitialValueBuilder<K> key(K k) {
        return new InitialValueBuilder<>(k);
    }

    public <K, V> Map<K, V> build() {
        return new HashMap<>();
    }

}

public class InitialValueBuilder<K> {
    
    private final K k;

    public InitialValueBuilder(K k) {
        this.k = k;
    }
    
    public <V> KeyBuilder<K, V> value(V v) {
        return new Maps<K, V>().key(k).value(v);
    }

}

These latter classes work in a similar fashion to the main builder: the InitialKeyBuilder returns an InitialValueBuilder which, in turn, creates a typed Maps builder that is then used indefinitely, alternately returning either a KeyBuilder or a ValueBuilder.
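Putting all the pieces together, the builder can be exercised end to end. The following self-contained sketch nests the interfaces and classes from above in a single file for convenience (the enclosing class name MapsDemo is mine, purely for packaging):

```java
import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import static java.util.stream.Collectors.toMap;

public final class MapsDemo {

    interface KeyBuilder<K, V> {
        ValueBuilder<K, V> key(K k);
        Map<K, V> build();
    }

    interface ValueBuilder<K, V> {
        KeyBuilder<K, V> value(V v);
    }

    static final class Maps<K, V> implements KeyBuilder<K, V>, ValueBuilder<K, V> {
        private final List<Entry<K, V>> entries = new ArrayList<>();
        private K lastKey;

        @Override
        public ValueBuilder<K, V> key(K k) {
            lastKey = k;
            return this;
        }

        @Override
        public KeyBuilder<K, V> value(V v) {
            entries.add(new AbstractMap.SimpleEntry<>(lastKey, v));
            return this;
        }

        @Override
        public Map<K, V> build() {
            return entries.stream().collect(toMap(Entry::getKey, Entry::getValue));
        }
    }

    static final class InitialKeyBuilder {
        <K> InitialValueBuilder<K> key(K k) { return new InitialValueBuilder<>(k); }
        <K, V> Map<K, V> build() { return new HashMap<>(); }
    }

    static final class InitialValueBuilder<K> {
        private final K k;
        InitialValueBuilder(K k) { this.k = k; }
        <V> KeyBuilder<K, V> value(V v) { return new Maps<K, V>().key(k).value(v); }
    }

    static InitialKeyBuilder builder() { return new InitialKeyBuilder(); }

    public static void main(String[] args) {
        Map<Integer, String> map = builder()
                .key(1).value("One")
                .key(2).value("Two")
                .build();
        System.out.println(map); // prints {1=One, 2=Two}
    }
}
```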

Conclusions

The Alternating Interface Exposure scheme is useful when you want a type-safe and context-aware model of your classes. You can develop and enforce a number of usage rules for your classes with this scheme. Such classes are much more intuitive to use, since the context-sensitive model and its types propagate all the way out to the IDE. The scheme also gives more robust code, because potential errors show up very early, in the design phase: we see them as we are coding, not as failed tests or application errors.