Tuesday, March 4, 2014

Markdown editor for blogger

Markdown is quite popular these days and I like it for its simplicity. So I’ve been searching for a way to integrate it into my blog-writing workflow. In Blogger, it’s unfortunately not an option.

There I could go for:

  • a WYSIWYG editor or
  • manual HTML writing.

However, the WYSIWYG editor generates a lot of HTML I don’t need, and pre-publish manual HTML editing then becomes a pain. The other option I consider rather slow and unproductive.

So, I looked for other options and found stackedit.io. Even this post (and a couple of others) was written in it.


I consider StackEdit great as it:

  • is a Markdown editor,
  • is open source (see the github repo),
  • has an awesome UI (including live preview, key bindings, …),
  • integrates with other popular services that I use anyway,
  • has an active community (just check the stars count and commit activity on the github repo) and
  • has impressive feedback time (resolution on my issues/questions came within a couple of hours).

So my post writing/editing workflow goes like this:

  1. write/edit post on stackedit.io,
  2. sync it to google drive and
  3. publish/republish it to blogger.

That’s it! No further in-blogger updates required!

Blogger specifics

Still, there are some specifics in my workflow (to provide smooth Blogger integration):

  • using the interpreted variables title and tags via:
title: Markdown editor for blogger
tags: blogger markdown stackedit.io

p6spy 2.0.0 is out!

After 8 years, p6spy has reached its next stable release!
You can get it here.

The last stable release happened (based on the maven central repo) on 27-Dec-2005 (version 1.3). That is quite some time, so one would expect many things to have happened in the meantime. Well, the truth is that the project was (half) dead for quite a while.


I can’t comment on the full history since the 1.3 release (my interest in the project only started last summer), but I’ve noticed the following:

  • project hosting was moved from sourceforge to github,
  • major part of the legacy code was refactored,
  • Java 6/7 JDBC API support introduced,
  • proxying via modified JDBC URLs only was implemented,
    So for MySQL the original URL would be:

    jdbc:mysql://<hostname>:<port>/<database>

    the one proxied via p6spy would be:

    jdbc:p6spy:mysql://<hostname>:<port>/<database>

    without a need for any further configuration,

  • XA Datasource support has been introduced,
  • configuration via:
    • system/environment properties and
    • JMX properties
    as an alternative to file-only configuration, or even a zero-config use case,
  • slf4j support (more flexible than the previously used log4j),
  • junit tests were migrated to junit 4 (well, most of the old ones were failing anyway),
  • Continuous integration using Travis was set up, providing testing on popular:
    • DB systems (namely: Oracle, DB2, PostgreSQL, MySQL, H2, HSQLDB, SQLite, Firebird, and Derby), see build status on: travis-ci as well as
    • application servers (namely: Wildfly 8, JBoss 4.2, 5.1, 6.1, 7.1, Glassfish 3.1, 4.0, Jetty 7.6, 8.1, 9.1, Tomcat 6, 7, 8, Resin 4, Jonas 5.3 and Geronimo 2.1, 2.2), see build status on: travis-ci.
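The URL-only proxying mentioned above boils down to inserting a single prefix into the JDBC URL. A minimal sketch of that rewrite (the `JdbcUrls` class and `toP6Spy` method are made-up names for illustration, not part of the p6spy API):

```java
// Sketch of p6spy's URL convention: the vendor part of a JDBC URL is
// prefixed with "p6spy:", and the p6spy driver takes it from there.
public class JdbcUrls {

    /** Turns a plain JDBC URL into its p6spy-proxied form. */
    public static String toP6Spy(String url) {
        if (!url.startsWith("jdbc:")) {
            throw new IllegalArgumentException("not a JDBC URL: " + url);
        }
        // jdbc:mysql://host/db -> jdbc:p6spy:mysql://host/db
        return "jdbc:p6spy:" + url.substring("jdbc:".length());
    }

    public static void main(String[] args) {
        System.out.println(toP6Spy("jdbc:mysql://localhost:3306/test"));
    }
}
```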

Full changelog

For the full changelog, see issues fixed in: 2.0.0-alpha1 as well as 2.0.0.

Postgres and Oracle compatibility with Hibernate

There are situations where your JEE application needs to support both Postgres and Oracle as a database.
Hibernate should do the job here; however, there are some specifics worth mentioning.
While enabling Postgres for an application already running on Oracle, I came across the following tricky parts:

  • BLOBs support,
  • CLOBs support,
  • Oracle not knowing the Boolean type (using Integer instead) and
  • DUAL table.

These were the tricks I had to apply to make the @Entity classes run on both of these.

Please note I’ve used Postgres 9.3 with Hibernate 4.2.1.SP1.

BLOBs support

The problem with Postgres is that it offers 2 types of BLOB storage:

  • bytea - data stored in table
  • oid - table holds just identifier to data stored elsewhere

I guess in most situations you can live with bytea, as I did. The other one, as far as I’ve read, is meant for really huge data (gigabytes), as it supports streams for IO operations.

Well, it sounds nice that there is such support; however, using Hibernate in this case can make things quite problematic (due to the need for specific annotations), especially if you try to stay compatible with Oracle.

To see the trouble here, check StackOverflow: proper hibernate annotation for byte[]

All the combinations are described there:

annotation                   postgres     oracle      works on
byte[] + @Lob                oid          blob        oracle
byte[]                       bytea        raw(255)    postgresql
byte[] + @Type(PBA)          oid          blob        oracle
byte[] + @Type(BT)           bytea        blob        postgresql

where @Type(PBA) stands for: @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType") and @Type(BT) stands for: @Type(type="org.hibernate.type.BinaryType").

These result in all sorts of Postgres errors, like:

ERROR: column “foo” is of type oid but expression is of type bytea

or:

ERROR: column “foo” is of type bytea but expression is of type oid

Well, there seems to be a solution, but it involves patching the Hibernate library (something I see as the last resort when dealing with a third-party library).

There is also a reference to an official blog post from the Hibernate guys on the topic: PostgreSQL and BLOBs. Still, the solution described there didn’t work for me and, based on the comments, seems to be invalid for other people as well.

BLOBs solved

OK, so now the optimistic part.

After quite some debugging I ended up with an Entity definition like this:

private byte[] foo;

Oracle has no trouble with that. For Postgres, though, I had to customize the dialect like this:

public class PostgreSQLDialectCustom extends PostgreSQL82Dialect {

  @Override
  public SqlTypeDescriptor remapSqlTypeDescriptor(SqlTypeDescriptor sqlTypeDescriptor) {
    if (sqlTypeDescriptor.getSqlType() == java.sql.Types.BLOB) {
      return BinaryTypeDescriptor.INSTANCE;
    }
    return super.remapSqlTypeDescriptor(sqlTypeDescriptor);
  }
}

That’s it! Quite simple, right? That works for persisting to bytea-typed columns in Postgres (as that fits my use case).
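For the custom dialect to be picked up, Hibernate needs to be pointed at it, e.g. via the standard hibernate.dialect property in persistence.xml (the fully qualified class name below is an assumption; use whatever package you placed the dialect in):

```xml
<property name="hibernate.dialect" value="com.example.dialect.PostgreSQLDialectCustom"/>
```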

CLOBs support

With a misconfiguration, the errors looked something like this:

org.postgresql.util.PSQLException: Bad value for type long : ...

So first I found (in String LOBs on PostgreSQL with Hibernate 3.6) the following solution:

@Type(type = "org.hibernate.type.TextType")
private String foo;

Well, that works, but for Postgres only.

Then there was a suggestion (on StackOverflow: Postgres UTF-8 clobs with JDBC) to go for:

@Type(type = "org.hibernate.type.StringClobType")
private String foo;

That pointed me in the right direction (the funny part was that it was just a comment on one of the answers). It was quite close, but didn’t work for me in all cases; it still resulted in errors in my tests.

CLOBs solved

The important hint was the @deprecated javadoc in org.hibernate.type.StringClobType that brought me to the working one:

@Type(type = "org.hibernate.type.MaterializedClobType")
private String foo;

That works for both Postgres and Oracle, without any further hacking (on Hibernate side) needed.

Boolean type

Oracle knows no Boolean type, but the trouble is that Postgres does. As there was also some plain SQL present, I ended up in Postgres with the error:

ERROR: column “foo” is of type boolean but expression is of type integer

I decided to enable the cast from Integer to Boolean in Postgres rather than fixing all the plain SQL places (in the way found in Forum: Automatically Casting From Integer to Boolean):

update pg_cast
   set castcontext = 'i'
 where oid in (
   select c.oid
     from pg_cast c
    inner join pg_type src on src.oid = c.castsource
    inner join pg_type tgt on tgt.oid = c.casttarget
    where src.typname like 'int%'
      and tgt.typname like 'bool%');

Please note you should run the SQL update as a user with privileges to update catalogs (probably not the postgres user used for the DB connection from your application), as I learned on Stackoverflow: Postgres - permission denied on updating pg_catalog.pg_cast.

DUAL table

There is one more Oracle specific I came across. If you have plain SQL, Oracle provides the DUAL table (see more info on Wikipedia), which might hurt you in Postgres.

Still, the solution is simple: in Postgres, create a view that fills a similar purpose. It can be created like this:

create or replace view dual as select 1;


Well, that should be it. Enjoy your cross-DB compatible JEE apps.

Tuesday, January 21, 2014

yeoman - testing with mocha and chai instead of jasmine

I've been playing around with yeoman these days. It seems like a project worth trying for my free-time single page application.

After the initial setup (using the AngularJS generator), I decided to let my tests run with mocha + chai (with expect assertions) rather than jasmine (generated by default).

In fact it was super-simple. The diffs that matter:
diff --git a/yo/karma.conf.js b/yo/karma.conf.js
index bc0e168..385045f 100644
--- a/yo/karma.conf.js
+++ b/yo/karma.conf.js
@@ -7,7 +7,7 @@ module.exports = function(config) {
     basePath: '',
     // testing framework to use (jasmine/mocha/qunit/...)
-    frameworks: ['jasmine'],
+    frameworks: ['mocha', 'chai'],
     // list of files / patterns to load in the browser
     files: [
diff --git a/yo/package.json b/yo/package.json
index 3e3db14..9649a55 100644
--- a/yo/package.json
+++ b/yo/package.json
@@ -34,7 +34,8 @@
     "karma-html2js-preprocessor": "~0.1.0",
     "karma-firefox-launcher": "~0.1.3",
     "karma-script-launcher": "~0.1.0",
-    "karma-jasmine": "~0.1.5",
+    "karma-mocha": "~0.1.1",
+    "karma-chai": "~0.0.2",
     "karma-coffee-preprocessor": "~0.1.2",
     "requirejs": "~2.1.10",
     "karma-requirejs": "~0.2.1",
diff --git a/yo/test/spec/controllers/main.js b/yo/test/spec/controllers/main.js
index eb7cda2..f9becde 100644
--- a/yo/test/spec/controllers/main.js
+++ b/yo/test/spec/controllers/main.js
@@ -17,6 +17,6 @@ describe('Controller: MainCtrl', function () {
   it('should attach a list of awesomeThings to the scope', function () {
-    expect(scope.awesomeThings.length).toBe(3);
+    expect(scope.awesomeThings.length).to.equal(3);
Afterwards, just run (to fetch the missing packages):
npm install
and the tests succeed:
grunt test

Tuesday, December 17, 2013

Liquibase - implementing a custom type

There are some things to keep in mind while implementing a custom type in Liquibase. Let's see how to create one.

Existing database types

Well, this might be the biggest advantage of using/hacking open source projects: you have a chance to see how things are done. This is useful for finding inspiration, the places that require modifications, and the hooks into the existing system.

Classes worth checking for our purposes are in the package liquibase.datatype.core.
You can browse them directly on github.

Sample custom BlobType implementation

Let's assume we're about to modify BlobType. Our implementation could look like this:
package liquibase.datatype.core;

import liquibase.database.Database;
import liquibase.database.core.PostgresDatabase;
import liquibase.datatype.DataTypeInfo;
import liquibase.datatype.DatabaseDataType;
import liquibase.datatype.LiquibaseDataType;

@DataTypeInfo(name = "blob", aliases = { "longblob", "longvarbinary", "java.sql.Types.BLOB",
    "java.sql.Types.LONGBLOB", "java.sql.Types.LONGVARBINARY", "java.sql.Types.VARBINARY",
    "varbinary" }, minParameters = 0, maxParameters = 0, priority = LiquibaseDataType.PRIORITY_DATABASE)
public class BlobTypeTest extends BlobType {

  @Override
  public DatabaseDataType toDatabaseDataType(Database database) {
    // handle the specifics here; you can go for per-DB specifics, let's assume Postgres
    if (database instanceof PostgresDatabase) {
      // your custom type here
    }
    // use defaults for all the others
    return super.toDatabaseDataType(database);
  }
}
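Once the type is registered, nothing changes on the changelog side; a changeSet keeps referring to the logical type name (the table and column names below are hypothetical):

```xml
<changeSet id="1" author="me">
    <createTable tableName="document">
        <column name="data" type="blob"/>
    </createTable>
</changeSet>
```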

Specifics to keep in mind

There are some specifics that should be considered:
  • DataType priority is important

    To make sure our type is considered in favor of the default implementation, we're going for:
    priority = LiquibaseDataType.PRIORITY_DATABASE
    where the default one (in the supertype) is:
    priority = LiquibaseDataType.PRIORITY_DEFAULT

    See the method implementation for details.
  • DataType registration only scans specific packages

    We have more options here, but our class should go into one of these:
    • any of the packages listed in the jar's MANIFEST.MF property Liquibase-Package:
      where the default set in liquibase-core-3.0.8.jar is:
      Liquibase-Package: liquibase.change,liquibase.database,liquibase.parse
    • a comma-separated custom package list provided via a system property, or
    • if all of the above are empty, a fallback package list is used, as the implementation says:
      if (packagesToScan.size() == 0) {

    See the method implementation for details.

    Well, as you might have noticed, I was lazy enough to go for an already registered package (liquibase.datatype.core, as in the sample above), so it worked once my implementation was on the classpath.
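If you'd rather keep the implementation in your own package, the manifest route from the first option could look like this (com.example.liquibase.types is a made-up package name):

```
Liquibase-Package: liquibase.change,liquibase.database,liquibase.parse,com.example.liquibase.types
```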
That should be it.

Debugging ant task

It's always a good idea to debug when playing around with custom changes in third-party code.
As I went for the ant task, I just adapted the ANT_OPTS variable. In my case (I'm on linux) the following worked:
export ANT_OPTS="-Xdebug -agentlib:jdwp=transport=dt_socket,server=y,address=8000"
Remote debugging was possible afterwards.

To check that the custom type is registered, inspect in the debug session (in the constructor performing the type scan) the contents of the local variable "classes" after the line:
classes = ServiceLocator.getInstance().findClasses(LiquibaseDataType.class);
to see all the types found.

That should provide you the basics on liquibase types hacking.

Monday, December 16, 2013

Oracle JDK 1.7.0_u45 incompatibility with Glassfish 3.x (Corba)

Q: Did you give a try to Glassfish 3.1.x with latest stable JDK (1.7.0_u45)?
A: Well, you might be interested in possible trouble there.

After quite some debugging of a weird error, namely:
Caused by: java.lang.ArrayIndexOutOfBoundsException: -1636631191
at java.util.HashMap.put(HashMap.java:498)
at java.util.HashSet.add(HashSet.java:217)
I ended up creating: https://java.net/jira/browse/GLASSFISH-20927 (btw, it was quite weird, but I'm not allowed to add attachments in their jira :)
The problem seems to be in the internals of HashSet -> HashMap, where Corba doesn't seem to set the expected reference to EMPTY_TABLE in the case of an empty Set. Let's see how they proceed with the analysis.

Tuesday, October 22, 2013

My Firefox browsing privacy settings

I decided to go for a bit more privacy in my daily browsing. I don't think I need full security (I'm active on social networks anyway, so that would be the first thing to cut), but let's say at least some. I've been influenced by duckduckgo's donttrack.us and fixtracking.com pages.

I'm running Linux, so for the browser I have 2 mainstream options:
  • Chrome/Chromium
  • Firefox
Chrome throws
Segmentation fault (core dumped)
at me (on both of my linux machines - Xubuntu/Fedora). Due to lack of motivation I didn't investigate further (possibly some misbehaving extension). So my options narrow down to Firefox only. But that's OK with me, as I'm used to it after years of usage.

Firefox setup

I'm using duckduckgo.com as my default search engine in both:
  • Location bar:
    • To set it, go to the about:config page and set the corresponding search property.
  • Search bar:
    • To set it, click the down arrow in the Search bar, choose "Manage Search Engines ..." and move DuckDuckGo to the very top.
This way I escape the search + profile association that promotes to me, the next day, all the stuff I searched for a couple of days ago.


Moreover I go for the following add-ons:

Unsecure part

Well, there are some specifics in my setup. As mentioned earlier, I use social networks for posting, and as I'm lazy and like posting with minimal effort, I also use ShareThis. However, after installing all of the previously mentioned add-ons, ShareThis stopped working. These are the tweaks I had to make to enable it again:
  • DoNotTrackMe - I clicked the toolbar icon and then the options wheel in the corner, where I disabled "ShareThis" blocking,
  • moreover, as the ShareThis page rendered on sharing has problems with its HTTPS certificate, I had to deactivate it in HTTPS Everywhere as well,
  • once I'm ready to share a site and click the ShareThis button in the location bar (and nothing happens), I have to go to the "Disconnect me" toolbar button, choose the "Advertising" category and uncheck "Sharethis". I then need to click the ShareThis button once again.
That's it for my setup. I don't think I improved my privacy in a radical way, but let's say I went for options that don't hurt my everyday browsing experience much. Some (like ad blocking) make it even more pleasant.