Tuesday, January 21, 2014

yeoman - testing with mocha and chai instead of jasmine

I've been playing around with yeoman these days. It seems like a project worth a try for my free-time single-page application.

After the initial setup (using the AngularJS generator) I decided to let my tests run with mocha + chai (with expect assertions) rather than jasmine (generated by default).

In fact, it was super simple. Here are the diffs that matter:
diff --git a/yo/karma.conf.js b/yo/karma.conf.js
index bc0e168..385045f 100644
--- a/yo/karma.conf.js
+++ b/yo/karma.conf.js
@@ -7,7 +7,7 @@ module.exports = function(config) {
     basePath: '',
 
     // testing framework to use (jasmine/mocha/qunit/...)
-    frameworks: ['jasmine'],
+    frameworks: ['mocha', 'chai'],
 
     // list of files / patterns to load in the browser
     files: [
diff --git a/yo/package.json b/yo/package.json
index 3e3db14..9649a55 100644
--- a/yo/package.json
+++ b/yo/package.json
@@ -34,7 +34,8 @@
     "karma-html2js-preprocessor": "~0.1.0",
     "karma-firefox-launcher": "~0.1.3",
     "karma-script-launcher": "~0.1.0",
-    "karma-jasmine": "~0.1.5",
+    "karma-mocha": "~0.1.1",
+    "karma-chai": "~0.0.2",
     "karma-coffee-preprocessor": "~0.1.2",
     "requirejs": "~2.1.10",
     "karma-requirejs": "~0.2.1",
diff --git a/yo/test/spec/controllers/main.js b/yo/test/spec/controllers/main.js
index eb7cda2..f9becde 100644
--- a/yo/test/spec/controllers/main.js
+++ b/yo/test/spec/controllers/main.js
@@ -17,6 +17,6 @@ describe('Controller: MainCtrl', function () {
   }));
 
   it('should attach a list of awesomeThings to the scope', function () {
-    expect(scope.awesomeThings.length).toBe(3);
+    expect(scope.awesomeThings.length).to.equal(3);
   });
 });
Afterwards, just run the following to fetch the missing packages:
npm install
and the tests succeed:
grunt test
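If you prefer, npm can also handle the package.json bookkeeping for you instead of the manual edit above (just note the version ranges it records may differ slightly from the diff):
npm uninstall karma-jasmine --save-dev
npm install karma-mocha karma-chai --save-dev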

Tuesday, December 17, 2013

Liquibase - implementing a custom type.

There are a few things to keep in mind when implementing a custom type in Liquibase. Let's see how to create one.

Existing database types

Well, this might be the biggest advantage of using/hacking open source projects: you get a chance to see how things are done. This is useful for finding inspiration, the places that require modification, and the hooks into the existing system.

Classes worth checking for our purposes are in the package:
liquibase.datatype.core
You can browse them directly on GitHub.

Sample custom BlobType implementation

Let's assume we're about to modify BlobType. Our implementation could look like this:
package liquibase.datatype.core;

import liquibase.database.Database;
import liquibase.database.core.PostgresDatabase;
import liquibase.datatype.DataTypeInfo;
import liquibase.datatype.DatabaseDataType;
import liquibase.datatype.LiquibaseDataType;

@DataTypeInfo(name = "blob", aliases = { "longblob", "longvarbinary", "java.sql.Types.BLOB",
    "java.sql.Types.LONGBLOB", "java.sql.Types.LONGVARBINARY", "java.sql.Types.VARBINARY",
    "varbinary" }, minParameters = 0, maxParameters = 0, priority = LiquibaseDataType.PRIORITY_DATABASE)
public class BlobTypeTest extends BlobType {
  
  @Override
  public DatabaseDataType toDatabaseDataType(Database database) {
    // handle the DB specifics here; let's assume Postgres
    if (database instanceof PostgresDatabase) {
       // your custom type here
    }
    // use defaults for all the others
    return super.toDatabaseDataType(database);
  }

}

Specifics to keep in mind

There are some specifics that should be considered:
  • DataType priority is important

    To make sure our type is considered in favor of the default implementation, we're going for:
    priority = LiquibaseDataType.PRIORITY_DATABASE
    
    where the default one (in the supertype) is:
    priority = LiquibaseDataType.PRIORITY_DEFAULT
    
    This simply means our implementation takes precedence over the default one.

    See the implementation of:
    liquibase.datatype.DataTypeFactory.register()

    for details.
  • DataType registration only scans specific packages

    We have several options here, but our class should go into any of these:
    • any package listed in the jar's MANIFEST.MF property:
      Liquibase-Package
      
      Where the default set in the liquibase-core-3.0.8.jar is:
      Liquibase-Package: liquibase.change,liquibase.database,liquibase.parse
       r,liquibase.precondition,liquibase.datatype,liquibase.serializer,liqu
       ibase.sqlgenerator,liquibase.executor,liquibase.snapshot,liquibase.lo
       gging,liquibase.diff,liquibase.structure,liquibase.structurecompare,l
       iquibase.lockservice,liquibase.ext
      
    • a comma-separated custom package list provided via the system property (a sketch follows after this list):
      liquibase.scan.packages
      
    • if all of the above are empty, note that a fallback package list is used, as the implementation shows:
      if (packagesToScan.size() == 0) {
      	addPackageToScan("liquibase.change");
      	addPackageToScan("liquibase.database");
      	addPackageToScan("liquibase.parser");
      	addPackageToScan("liquibase.precondition");
      	addPackageToScan("liquibase.datatype");
      	addPackageToScan("liquibase.serializer");
      	addPackageToScan("liquibase.sqlgenerator");
      	addPackageToScan("liquibase.executor");
      	addPackageToScan("liquibase.snapshot");
      	addPackageToScan("liquibase.logging");
      	addPackageToScan("liquibase.diff");
      	addPackageToScan("liquibase.structure");
      	addPackageToScan("liquibase.structurecompare");
      	addPackageToScan("liquibase.lockservice");
      	addPackageToScan("liquibase.ext");
      }
      

    See the implementation of:
    ServiceLocator.setResourceAccessor()

    for details.

    Well, as you might have noticed, I was lazy and went for the already registered package:
    liquibase.datatype.core
    
    so it worked as soon as my implementation was on the classpath.
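For completeness, here is a minimal sketch of the non-lazy route, assuming a hypothetical custom package com.example.liquibase and that you drive Liquibase through ant (as I do below). The system property can be handed to the ant JVM via ANT_OPTS, and the jar with your type has to be on the task's classpath:
# com.example.liquibase and update-database are placeholders - adjust to your project
export ANT_OPTS="-Dliquibase.scan.packages=com.example.liquibase"
ant update-database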
That should be it.

Debugging ant task

It's always a good idea to debug when playing around with custom changes in 3rd-party code.
As I went for the ant task, I just adapted the ANT_OPTS variable. In my case (as I'm on Linux) the following worked:
export ANT_OPTS="-Xdebug -agentlib:jdwp=transport=dt_socket,server=y,address=8000"
Remote debugging was possible afterwards.
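If you'd rather stay on the command line than attach an IDE, jdb should be able to attach to the same socket (assuming the defaults above, i.e. localhost and port 8000):
jdb -attach localhost:8000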

To check whether the custom type is registered, inspect (in the debug session) the constructor:
DataTypeFactory.DataTypeFactory()
and the contents of the local variable "classes" after the line:
classes = ServiceLocator.getInstance().findClasses(LiquibaseDataType.class);
to see all the types found.

That should give you the basics of Liquibase type hacking.

Monday, December 16, 2013

Oracle JDK 1.7.0_u45 incompatibility with Glassfish 3.x (Corba)

Q: Have you tried Glassfish 3.1.x with the latest stable JDK (1.7.0_u45)?
A: Well, you might be interested in the trouble possibly waiting there.

After quite some debugging of a weird error, namely:
Caused by: java.lang.ArrayIndexOutOfBoundsException: -1636631191
at java.util.HashMap.put(HashMap.java:498)
at java.util.HashSet.add(HashSet.java:217)
...
I ended up creating: https://java.net/jira/browse/GLASSFISH-20927 (btw, quite weirdly, I'm not allowed to add attachments in their JIRA :)
The problem seems to be in the internals of HashSet -> HashMap, where Corba doesn't seem to set the expected reference to EMPTY_TABLE in the case of an empty Set. Let's see how they proceed with the analysis.

Tuesday, October 22, 2013

My Firefox browsing privacy settings

I decided to go for a bit more privacy in my daily browsing. I don't think I need full security (I'm active on social networks anyway, so that would be the first thing to cut), but let's say at least some. I've been influenced by DuckDuckGo's donttrack.us and fixtracking.com pages.

I'm running Linux, so for the browser I have 2 mainstream options:
  • Chrome/Chromium
  • Firefox
Chrome throws
Segmentation fault (core dumped)
at me (on both of my Linux machines - Xubuntu/Fedora). Due to lack of motivation I didn't investigate any further (possibly some misbehaving extension). So my options narrow down to Firefox only. But that is OK with me, as I'm used to it after some years of usage.

Firefox setup

I'm using duckduckgo.com as my default search engine in both
  • Location bar:
    • To set it, go to page:
      about:config 
      
      and set property:
      keyword.URL 
      
      to:
      https://duckduckgo.com/?q=
      
    and
  • Search bar:
    • To set it, click the down arrow in the Search bar, choose "Manage Search Engines ..." and move DuckDuckGo to the very top
This way I can escape the search + profile association that keeps promoting to me, the next day, all the stuff I searched for a couple of days ago.

Add-ons

Moreover, I go for the following add-ons - the ones referenced in the tweaks below: DoNotTrackMe, HTTPS Everywhere, Disconnect and an ad blocker.

Insecure part

Well, there are some specifics in my setup. As mentioned earlier, I'm using social networks for posting, and as I'm lazy and like posting with minimal effort, I also use ShareThis. However, since installing all of the above, ShareThis stopped working. The following are the tweaks I had to make to enable it again:
  • DoNotTrackMe - I clicked the toolbar icon and then the options wheel in the corner, where I disabled "ShareThis" blocking,
  • moreover, as the ShareThis page rendered on sharing has problems with its HTTPS certificate, I had to deactivate it in HTTPS Everywhere as well,
  • once I'm ready to share a site and click the ShareThis button in the location bar (and nothing happens), I have to go to the Disconnect toolbar button, choose the "Advertising" category and uncheck "Sharethis". Then I need to click the ShareThis button once again.
That's it for my setup. I don't think I improved my privacy radically, but let's say I went for options that don't hurt my everyday browsing experience that much. Some (like ad blocking) even make it more pleasant.

Eclipse toolbar housekeeping

It took me quite some time before I really started to care about all the wasted space in my IDE of choice - the Eclipse IDE. The problem is that the toolbar contains elements I never use. The reasons differ: some I don't need, for some I use keyboard shortcuts (which I consider faster), and for the rest I don't have a clue what they're intended for.

Once I checked the options, motivated by removing the "Quick access" element eating my toolbar space (since the Eclipse Juno version), I found a stackoverflow post providing even more help than I originally expected.

It correctly pointed me to a still unresolved Eclipse bug, which however was not what I was looking for. Still, there were other answers giving me what I came for.

Removing "Quick access" element

As this answer pointed out, adding:
#SearchField {
   visibility:hidden;
}
to the:
<ECLIPSE_HOME>/plugins/org.eclipse.platform_<VERSION>/css/e4_basestyle.css
and restarting Eclipse does the job.
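If you're not sure which exact plugin folder your installation uses, a quick find from the shell should locate the file (assuming ECLIPSE_HOME points to your Eclipse installation directory):
find "$ECLIPSE_HOME/plugins" -name e4_basestyle.css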

Removing toolbar buttons

As the other answer pointed out, going for Window -> Customize Perspective ... I was able to get rid of the buttons I don't need in my toolbar.
Please note: this should be done on a per-perspective basis.

Conclusion

Since applying the above, my toolbar behaves and takes up only one row of my vertical space. In fact, it's about half empty :)

Sunday, October 20, 2013

How I left the output redirection to file.

If you feel at home with the Linux command line, you often end up piping command outputs. Pipes enable us to do amazing things.

However, there are times when the console is not the right place to examine the final output and your favourite editor could do a much better job.
In the past I used to write the output to a file and open it afterwards, until... I noticed it's possible to pipe directly to the editor.

Let's assume we're interested in reading tail output in our editor of choice.

Gedit

As Gedit was my default editor for quite some time, let's see it in action:
tail -f some.log | gedit
No surprise :), I guess. No extra arguments. Nice!

Moreover, gedit even provides an indicator (displayed on the tab) that loading is in progress, and it automatically refreshes the contents.

Gvim

I'm in the process of transitioning to gvim. I've heard about its power, but never really had a chance to dive deep into it. However, after watching some vimcasts and reading a couple of reviews, I'm quite amazed. I'm still trying to memorize the keys for specific tasks (that should be just a question of time and frequency of use of particular ones).

I've already made it my primary editor (wherever I used to use Gedit before).

OK, let's see it in action:
tail -f some.log | gvim -R -
Please note that -R is not mandatory, but useful, as it marks the file as read-only.

Kate

I'm not really a KDE guy (these days/years :), but this might still be useful for those using the Kate editor:
tail -f some.log | kate -i 

Bash integration

In the comments of the site (commandlinefu) where I found the gvim solution, there was one more use case that caught my attention. Having the following in .bashrc:
function gv() {
  # run the given command and open its output in gvim (read-only); quote $@ so arguments with spaces survive
  "$@" | gvim -R -
}
enables me to do:
gv tail -f some.log
which makes things even more comfortable. Sure, you could replace the function name, as well as the editor, with the ones you prefer.

Well, that's it. Enjoy (or forget it if you find it useless :), or, if interested, share how you achieve this with your editor of choice.

XML processing in shell.

Q: Have you ever struggled with XML processing in the shell?
A: There's an elegant tool: XMLStarlet (http://xmlstar.sourceforge.net/).

Why would you care?

Well, if you:
  • use the shell environment (bash in my case),
  • have a need for XML data extraction/transformation and
  • you know or are willing to learn XPath/XSLT
then you should care. In fact, it might be the perfect match.

Download/Installation

Follow the official docs. According to the link, if you're on Linux it's as easy as things should be: "Bundled with your nearest Linux distribution".
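For illustration, on the distributions I mentioned earlier (Xubuntu/Fedora) the package should simply be called xmlstarlet, so installing boils down to something like:
sudo apt-get install xmlstarlet   # Debian/Ubuntu/Xubuntu
sudo yum install xmlstarlet       # Fedora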

Usage

Well, I'd recommend checking:
man xmlstarlet
and
xmlstarlet --help
I'm going to focus on just one part of the functionality, namely data extraction. To see the official help, go for:
xmlstarlet sel
The output is quite impressive, even giving you some examples. For the lazy (like me), I copy/paste it here:
XMLStarlet Toolkit: Select from XML document(s)
Usage: xmlstarlet sel <global-options> {<template>} [ <xml-file> ... ]
where
  <global-options> - global options for selecting
  <xml-file> - input XML document file name/uri (stdin is used if missing)
  <template> - template for querying XML document with following syntax:
 
<global-options> are:
  -Q or --quiet             - do not write anything to standard output.
  -C or --comp              - display generated XSLT
  -R or --root              - print root element <xsl-select>
  -T or --text              - output is text (default is XML)
  -I or --indent            - indent output
  -D or --xml-decl          - do not omit xml declaration line
  -B or --noblanks          - remove insignificant spaces from XML tree
  -E or --encode <encoding> - output in the given encoding (utf-8, unicode...)
  -N <name>=<value>         - predefine namespaces (name without 'xmlns:')
                              ex: xsql=urn:oracle-xsql
                              Multiple -N options are allowed.
  --net                     - allow fetch DTDs or entities over network
  --help                    - display help
 
Syntax for templates: -t|--template <options>
where <options>
  -c or --copy-of <xpath>   - print copy of XPATH expression
  -v or --value-of <xpath>  - print value of XPATH expression
  -o or --output <string>   - output string literal
  -n or --nl                - print new line
  -f or --inp-name          - print input file name (or URL)
  -m or --match <xpath>     - match XPATH expression
  --var <name> <value> --break or
  --var <name>=<value>      - declare a variable (referenced by $name)
  -i or --if <test-xpath>   - check condition <xsl:if test="test-xpath">
  --elif <test-xpath>       - check condition if previous conditions failed
  --else                    - check if previous conditions failed
  -e or --elem <name>       - print out element <xsl:element name="name">
  -a or --attr <name>       - add attribute <xsl:attribute name="name">
  -b or --break             - break nesting
  -s or --sort op xpath     - sort in order (used after -m) where
  op is X:Y:Z,
      X is A - for order="ascending"
      X is D - for order="descending"
      Y is N - for data-type="numeric"
      Y is T - for data-type="text"
      Z is U - for case-order="upper-first"
      Z is L - for case-order="lower-first"
 
There can be multiple --match, --copy-of, --value-of, etc options
in a single template. The effect of applying command line templates
can be illustrated with the following XSLT analogue
 
xml sel -t -c "xpath0" -m "xpath1" -m "xpath2" -v "xpath3" 
        -t -m "xpath4" -c "xpath5"
 
is equivalent to applying the following XSLT
 
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
  <xsl:call-template name="t1"/>
  <xsl:call-template name="t2"/>
</xsl:template>
<xsl:template name="t1">
  <xsl:copy-of select="xpath0"/>
  <xsl:for-each select="xpath1">
    <xsl:for-each select="xpath2">
      <xsl:value-of select="xpath3"/>
    </xsl:for-each>
  </xsl:for-each>
</xsl:template>
<xsl:template name="t2">
  <xsl:for-each select="xpath4">
    <xsl:copy-of select="xpath5"/>
  </xsl:for-each>
</xsl:template>
</xsl:stylesheet>
 
XMLStarlet is a command line toolkit to query/edit/check/transform
XML documents (for more information see http://xmlstar.sourceforge.net/)
 
Current implementation uses libxslt from GNOME codebase as XSLT processor
(see http://xmlsoft.org/ for more details)
Please note the switch:
-C or --comp              - display generated XSLT
This can really help if you're struggling with a particular problem, to see what's really being done in the background.

Let's jump directly to examples.

XmlStarlet in action

Imagine a sample XML file (copied from http://www.w3schools.com/xml/xml_attributes.asp) called sample.xml:
<messages>
  <note id="501">
    <to>Tove</to>
    <from>Jani</from>
    <heading>Reminder</heading>
    <body>Don't forget me this weekend!</body>
  </note>
  <note id="502">
    <to>Jani</to>
    <from>Tove</from>
    <heading>Re: Reminder</heading>
    <body>I will not</body>
  </note>
</messages>
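If you want to follow along, save the snippet above as sample.xml. A quick well-formedness check can be done with xmlstarlet itself, using the val command:
xmlstarlet val sample.xml
which should report something like:
sample.xml - valid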


Element value selection

Let's say we want to extract all 'from' element values. Using:
xmlstarlet sel -t -m "messages" -m "note" -v "from" < sample.xml
we get:
JaniTove
So what have we done?
  • sel - select data or query XML document
  • -t - template definition
  • -m "messages" - match "messages" XPath expression
  • -m "note" - match "note" XPath expression
  • -v "from" - print value of "from" XPath expression
  • sample.xml - test input file used
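If the glued-together output (JaniTove) bothers you, the -n/--nl flag from the help above prints a newline after each match:
xmlstarlet sel -t -m "messages" -m "note" -v "from" -n < sample.xml
which should give one name per line:
Jani
Tove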
Do you want to see the XSLT used in the background? No problem, just go for:
xmlstarlet sel -C -t -m "messages" -m "note" -v "from" < sample.xml
to see:
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:exslt="http://exslt.org/common" version="1.0" extension-element-prefixes="exslt">
  <xsl:output omit-xml-declaration="yes" indent="no"/>
  <xsl:template match="/">
    <xsl:for-each select="messages">
      <xsl:for-each select="note">
        <xsl:call-template name="value-of-template">
          <xsl:with-param name="select" select="from"/>
        </xsl:call-template>
      </xsl:for-each>
    </xsl:for-each>
  </xsl:template>
  <xsl:template name="value-of-template">
    <xsl:param name="select"/>
    <xsl:value-of select="$select"/>
    <xsl:for-each select="exslt:node-set($select)[position()&gt;1]">
      <xsl:value-of select="'&#10;'"/>
      <xsl:value-of select="."/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
Easy, right?

Selecting xml element

Do you need the whole element, not just the value? Go for:
xmlstarlet sel -t -m "messages" -m "note" -c "from" < sample.xml
to see:
<from>Jani</from><from>Tove</from>
Where the only new flag introduced is:
  • -c "from" - print a copy of the "from" XPath expression

Attribute values

Let's end the examples with attribute values. Let's assume we're interested in the notes' id attribute values:
xmlstarlet sel -t -m "messages" -m "note" -v "@id" < sample.xml
should result in:
501502

Conclusion

XmlStarlet is much more powerful than shown in the examples above.
However, my goal was not to show it all (if interested, go for the official docs) but rather to show you a tool that can help with XML processing in shell scripts.
It can save you from quite some unneeded complexity possibly introduced by sed/awk/grep solutions, as it respects commented sections, ... and brings the full power of XPath to your scripting.

Well, I have to admit that the tool's development has somewhat stalled. Usually, that's not a good indicator. Still, I consider it mature and extremely useful.