Friday, November 14, 2008

JSyntaxPane on dzone

Geertjan Wielenga posted a very cool entry about JSyntaxPane on DZone here.
It has nice screenshots and a mini how-to.
Many thanks, Geertjan.

I haven't blogged for a while. I'll try to keep up with writing a bit more than just Java code.

Thursday, July 17, 2008

Back to Bahrain! and a JSyntaxPane fix

Well, this is my last day in Jeddah, and tomorrow I'm on my way to Bahrain, inshallah.

I had some time at the hotel, and since they have very good high-speed in-room internet and I had my laptop with me, I fired up Ubuntu, loaded and upgraded to the latest NetBeans, and fixed an outstanding issue with JSyntaxPane. So now it is bug free :-)

Monday, July 07, 2008

Darken web pages for easy reading

Personally, I prefer reading light text on a dark background. Most web pages use black text on a white background, including this blog. I'll probably change the theme on this blog later, but I'm actually lazy because of this amazing tool. All you need to do is drag it onto your bookmarks toolbar, and voila! One click, and all the text on the page will be grey on black, and links blue on black.

This only works on Firefox. Internet Explorer users: now you have another reason to switch ;-)

Here is the article and the bookmarklet.

Wednesday, July 02, 2008

Great Leaders

Joel Spolsky has a great article about Bill Gates and being a great leader. It's not that long and definitely worth reading.
The main point is that to be a good and successful leader, you have to know what your company / department / ministry / whatever is doing. You must be passionate about what you are doing. How many times have we seen unfit managers who have no clue what their subordinates are doing?
Enough said...

Sunday, June 29, 2008

JSyntaxPane with more Languages

I had some more free time, so I fixed the Java and Groovy Lexers to have better String handling. The older versions did not process String escape characters properly.

I also added Properties file and SQL lexers, and did some refactoring by moving the lexers into their own package.

The build file was modified to also generate the Java lexers from the flex files (you need the JFlex.jar file in your ant/lib folder for this to work).

This may be the last version for a while. I may not have time in the next few weeks to manage this.

The source is now available in the Source tab, and only the binary distribution is available as a download. You can still load and execute the Jar file; it will start the tester.

Have fun!

Monday, June 23, 2008

JSyntaxPane new version - now with auto indentation and undo/redo

I had some more free time to work on JSyntaxPane. I did some refactoring, though that should not change the way you use it.
Refactoring was done by creating a new SyntaxDocument to store the Tokens, instead of having the SyntaxView store them. The former way was actually very stupid!
The Document should know about its own tokens, and the View gets the tokens from the Document. That is how it is done now. And this way you can also retrieve data about the SyntaxDocument by knowing its EditorPane control. And no need for listeners!
The new SyntaxDocument also has built-in undo and redo.
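For illustration, getting at the document from client code is just a cast (a rough sketch, assuming a SyntaxKit has already been installed on the pane, so the Document it creates is a SyntaxDocument):

// Sketch: with the SyntaxKit installed, the pane's document is a SyntaxDocument, so the
// tokens (and the built-in undo / redo support) can be reached directly from it.
javax.swing.text.Document d = jEdtTest.getDocument();
if (d instanceof SyntaxDocument) {
    SyntaxDocument syntaxDoc = (SyntaxDocument) d;
    // ... query the tokens or drive undo / redo from here
}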
A new class SyntaxActions was also created with several useful TextActions thrown in. These include Smart and Java Indentation, and mapping TAB / Shift-TAB to indent / unindent the selected lines. The default undo / redo keys are also mapped to behave properly.
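Just to illustrate the kind of wiring this involves (this is not the library's code; the two action parameters stand in for whatever SyntaxActions actually provides), the TAB mapping boils down to standard Swing input and action maps:

import javax.swing.JEditorPane;
import javax.swing.KeyStroke;
import javax.swing.text.TextAction;

// Sketch only: the indent / unindent TextActions are passed in here because the real
// names inside SyntaxActions are not shown in this post.
public class IndentKeyBindings {
    public static void bind(JEditorPane pane, TextAction indent, TextAction unindent) {
        pane.getInputMap().put(KeyStroke.getKeyStroke("TAB"), "indent-lines");
        pane.getActionMap().put("indent-lines", indent);
        pane.getInputMap().put(KeyStroke.getKeyStroke("shift TAB"), "unindent-lines");
        pane.getActionMap().put("unindent-lines", unindent);
    }
}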
I also modified the Test application, and now it will display the token details under the cursor. This is very helpful for debugging the lexers.

Also changed is the way Fonts are used. Now the font of the Component is used, and you can only change the style (bold/italic) and color. This means that the entire EditorPane will have one single font face and size. This actually suits my taste, unlike Notepad++, which by default uses different fonts and sizes for comments.

Again, all you need to do is one line to set the EditorKit. All the above is done for you automatically.

Have a look at the Test Application using Java WebStart here. Java 1.5+ is needed.

Project home is on Google Code here.

Hmm.. what next? Scripting? Nah.. I'm probably done, for now. I needed a simple control to edit scripts in my Java Swing applications, and that's probably it. Unless...



Wednesday, June 18, 2008

JSyntaxPane is born

Based on demand (can't say popular, yet), I just created a Google Code project for JSyntaxPane. The NetBeans project is available as the main download.

You'll find the source under the src folder of the archive.

Here is a breakdown of what the classes do and how you can use the library in your own projects:

The SyntaxTester is a NetBeans-created main application you can use to see and test how syntax highlighting works. Or you can just run the Jar file in the dist folder.

SyntaxKit: you need to set your JEditorPane control's editorKit to a new instance of this. Just pass the required language as a String to the SyntaxKit constructor:

jEdtTest.setEditorKit(new SyntaxKit("java"));
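For a complete, if minimal, example of wiring it into a Swing app (just a usage sketch, not part of the distribution):

import javax.swing.JEditorPane;
import javax.swing.JFrame;
import javax.swing.JScrollPane;

// Minimal usage sketch: assumes the JSyntaxPane jar is on the classpath and that
// SyntaxKit takes the language name as a String, as in the line above.
public class SyntaxKitDemo {
    public static void main(String[] args) {
        JEditorPane editor = new JEditorPane();
        editor.setEditorKit(new SyntaxKit("java"));   // install the highlighting kit first
        editor.setText("public class Hello {\n    // start typing here\n}\n");

        JFrame frame = new JFrame("JSyntaxPane demo");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.add(new JScrollPane(editor));
        frame.setSize(600, 400);
        frame.setVisible(true);
    }
}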

SyntaxView does all the ugly work of maintaining the Tokens List for the Document and drawing the highlighted document.

SyntaxStyle is used to store various data about the style to use for each TokenType.

And finally SyntaxStyles is just a map of Styles. It has one method to set a Graphics object with the needed Font and Color for a Token.

All the Lexers were created using JFlex and the sources for them are in the JFlex folder of the archive.

Have fun! And please let me know if you find it useful, or if you have any feature requests.

Monday, June 16, 2008

Java Syntax Highlighting with JEditorPane

I've been playing around with implementing a Syntax Highlighting / Coloring editor or text control in Java Swing. Just for fun. It would be part of TranScope to edit scripts, mostly Groovy, and to view some TAL / DDL and XML. It was hard and time consuming, but also a hell of a lot of fun and a rewarding experience. I'll summarize what I did, and if there is demand, publish my final code as a project. Most of these topics were new to me before I started.

You may also want to check out JEdit Syntax Package. The project seems dead, but it works.

XML EditorKit:

I started out by reading about, and then getting the source for, the Batik XML Editor; more here.
It was okay, but just for XML, and it seemed quite tightly tied to XML, so modifying it for other languages was not easy. But it is a very good library to use by itself. And I used it until I wrote my XML Lexer.

Take One - Dynamic Regex:

I came across this link, which was really helpful in understanding what all these Views, Documents and EditorKits are about. Please read that link, as it is very helpful and to the point. There is also the Sun documentation about the Swing Text API here.

The code I created based on Kees was very simple. I used some regular expressions to get tokens from each line, and whenever a match is found, the Color for that regex is used to color the match. All that is needed is to put the regexes and associated colors in a Map, load them from a Properties file, and voila! Dynamic highlighting, without any code change and for any language.
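Something along these lines (a sketch of the idea, not the original code; the exact Properties format is an assumption):

import java.awt.Color;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: each property maps a regular expression to an RRGGBB color. The view's
// drawUnselectedText can then ask for the colored spans of the single line it is drawing.
public class RegexHighlighter {
    private final Map<Pattern, Color> colors = new LinkedHashMap<Pattern, Color>();

    public RegexHighlighter(Properties props) {
        for (Object key : props.keySet()) {
            String regex = (String) key;
            colors.put(Pattern.compile(regex), Color.decode("#" + props.getProperty(regex).trim()));
        }
    }

    /** Finds the colored spans in one line of text. */
    public void highlightLine(String line, SpanHandler handler) {
        for (Map.Entry<Pattern, Color> e : colors.entrySet()) {
            Matcher m = e.getKey().matcher(line);
            while (m.find()) {
                handler.span(m.start(), m.end(), e.getValue());
            }
        }
    }

    /** Callback the drawing code implements to paint each matched span in its color. */
    public interface SpanHandler {
        void span(int start, int end, Color color);
    }
}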

It worked perfectly... Almost. There is no need to keep any extra data about the Document, and highlighting does not need to parse anything except the single line being drawn by the View's drawUnselectedText method. This means it is very fast and needs no extra memory. The only problem is that multi-line constructs will NOT work. So multi-line comments are not handled.

This is no big limitation at all in many cases.

Take Two - Lexing + StyledDocument:

Here is where the fun begins. To properly handle multi-line constructs, simple regex matches are not really usable, and are very slow. What is needed is a parser or lexer. Java has many of these, including Antlr, JavaCC and JFlex.

I did some research and found JFlex to be the easiest to use for lexing. Remember, I only need to get Tokens, not create a compiler. JFlex was also very easy to use for in-memory characters (from the Document), and very fast. I did some benchmarks on my work PC: 2 GHz, 1 GB RAM, with lots of programs running, including NetBeans. Parsing a 200K Document still takes less than 15ms in most cases, and no performance impact is noticeable while typing.

I created my Lexer to return a Token object of this form:

public class Token implements Serializable, Comparable {
    public TokenType type;
    public int start;
    public int length;

    // other boilerplate code....

    @Override
    public int compareTo(Object o) {
        Token t = (Token) o;
        if (this.start != t.start) {
            return (this.start - t.start);
        } else if (this.length != t.length) {
            return (this.length - t.length);
        } else {
            return this.type.compareTo(t.type);
        }
    }
}

TokenType is an enum with all possible token types (OPER, IDENT, KEYWORD, STRING, COMMENT etc.)
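Driving a JFlex-generated lexer over the whole document text then looks roughly like this (a sketch: the generated class name, and that the flex files use %type Token and return null at end of input, are assumptions):

import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Sketch only: "JavaLexer" stands for a JFlex-generated class declared with "%type Token",
// so yylex() returns one Token per call and null when the input is exhausted.
static List<Token> lex(String documentText) throws IOException {
    List<Token> tokens = new ArrayList<Token>();
    JavaLexer lexer = new JavaLexer(new StringReader(documentText));
    for (Token t = lexer.yylex(); t != null; t = lexer.yylex()) {
        tokens.add(t);
    }
    return tokens;
}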

So, what I initially did was create a DocumentListener that updates a matching List of Tokens for the Document whenever the Document is updated.

Whenever the Document is updated, I just call setCharacterAttributes for all the tokens, depending on their type.
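In sketch form (not the original code), the listener looked something like this; the two abstract methods stand in for the JFlex call above and a TokenType-to-AttributeSet lookup:

import java.util.List;
import javax.swing.SwingUtilities;
import javax.swing.event.DocumentEvent;
import javax.swing.event.DocumentListener;
import javax.swing.text.AttributeSet;
import javax.swing.text.DefaultStyledDocument;

// Sketch of the "lex and setCharacterAttributes" approach described above.
abstract class RelexingListener implements DocumentListener {
    private final DefaultStyledDocument doc;

    RelexingListener(DefaultStyledDocument doc) {
        this.doc = doc;
    }

    protected abstract List<Token> lex(String text);            // e.g. the JFlex helper above
    protected abstract AttributeSet styleFor(TokenType type);   // assumed style lookup

    public void insertUpdate(DocumentEvent e)  { recolorLater(); }
    public void removeUpdate(DocumentEvent e)  { recolorLater(); }
    public void changedUpdate(DocumentEvent e) { /* attribute-only change: ignore */ }

    // The document cannot be touched from inside a listener notification,
    // so the re-coloring is pushed onto the event queue.
    private void recolorLater() {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                try {
                    String text = doc.getText(0, doc.getLength());
                    for (Token t : lex(text)) {
                        doc.setCharacterAttributes(t.start, t.length, styleFor(t.type), true);
                    }
                } catch (Exception ex) {
                    // a failed pass just leaves the previous colors in place
                }
            }
        });
    }
}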

That worked perfectly... if you have just a few lines. It quickly became very slow for any document with more than about 100 lines. It also consumed a LOT of memory. The main problem is that updating the styles of a StyledDocument this way is not what it was designed for.

When you write code, say you are writing the keyword "public":
  1. type "p", and parse the whole document. p is lexed as an identifier and those attributes are set to it, and everything else.
  2. type "u", same thing, "pu" is still lexed as identifier.
  3. type "b"...
  4. type "l"...
  5. type "i"...
  6. type "c" and now you have a keyword, so you change the char attributes for the whole "public".
Changing attributes is VERY slow in such cases. Lots of events are fired, and the StyledDocument keeps track of a lot of data about the styles of each character. For a script or program, you will have a separate style for almost every single word, so you will have a lot of data for even the shortest of scripts. The StyledDocument was not designed for this. It was designed for normal "English" text, where most of it is the same style, except for a header here or a bold word there.

I initially changed the implementation to only call setCharacterAttributes for the modified parts of the Document. This was done by calculating a delta of the old and new Tokens, and then only the changes were used to update the Styles. But the memory use was still too much, and when a big file was opened or pasted into the JEditorPane, it took a while to set all the attributes.

It worked, but I could do better... And I am still having fun, so why stop there?

Take Three - Lexing + PlainDocument:

The final solution is to Lex the entire document whenever it changes (which is very fast) and use a PlainView and PlainDocument implementation to render the text using the drawUnselectedText method.

The code now is structured like this:
class SyntaxKit extends StyledEditorKit implements ViewFactory:
This class is used by the JEditorPane to set the type of text it will show. In NetBeans, I change the EditorKit property to use an instance of this class. The create method of this class returns an instance of the SyntaxView class below.

class SyntaxView extends PlainView implements DocumentListener
This is the heart of the code. This class maintains a List of Tokens that match the contents of the Document it is to render. It keeps itself in sync with all document changes by registering itself as a DocumentListener. The insertUpdate and removeUpdate methods re-parse, or lex, the entire Document and put the Tokens in the tokens List member of this class. I removed the logic of maintaining a delta; it is fast, and there is less code to maintain. As I said, lexing was not a performance issue at all.

The drawUnselectedText method of this class is called to draw lines of text. This method looks at the tokens and draws them in the proper Fonts and Colors.
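Stripped down to the essentials, the drawing loop looks something like this (a sketch, not the actual JSyntaxPane source; setGraphicsStyle is an assumed name for the SyntaxStyles method mentioned earlier, and drawing the un-tokenized gaps between tokens is left out):

import java.awt.Graphics;
import java.util.List;
import javax.swing.text.BadLocationException;
import javax.swing.text.Document;
import javax.swing.text.Element;
import javax.swing.text.PlainView;
import javax.swing.text.Segment;
import javax.swing.text.Utilities;

// Sketch of a token-driven PlainView. The tokens list is the one maintained by the
// DocumentListener; SyntaxStyles.setGraphicsStyle is an assumed helper that sets the
// Font and Color on the Graphics for a given TokenType.
class TokenPlainView extends PlainView {
    private final List<Token> tokens;
    private final SyntaxStyles styles;

    TokenPlainView(Element elem, List<Token> tokens, SyntaxStyles styles) {
        super(elem);
        this.tokens = tokens;
        this.styles = styles;
    }

    @Override
    protected int drawUnselectedText(Graphics g, int x, int y, int p0, int p1)
            throws BadLocationException {
        Document doc = getDocument();
        Segment segment = new Segment();
        for (Token t : tokens) {
            int start = Math.max(t.start, p0);
            int end = Math.min(t.start + t.length, p1);
            if (end <= start) {
                continue; // token does not overlap the span being drawn
            }
            styles.setGraphicsStyle(g, t.type);                  // pick Font and Color
            doc.getText(start, end - start, segment);
            x = Utilities.drawTabbedText(segment, x, y, g, this, start);
        }
        return x;
    }
}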

One more thing done in this class is to override the updateDamage method. This is needed so that something like closing a multi-line comment updates not just the last line, but all lines in the view.

If anybody is interested, I'll either put the code on a Google Code project or show parts of it here. The project is now tightly integrated with TranScope, but I can spin it off as a separate project and remove the dependencies. There are currently Lexers for Java, Groovy, JavaScript, XML and Tandem / HP NSK TAL. To create your own, you only need to create the Lexer file and run it through JFlex.

Monday, April 07, 2008

TranScope and Xerialize

If you have no idea what XML or Serialization are, or if you only think of Java as the coffee, then leave now. Or just continue if you are curious.

I've been working on a Pet Project for some time now. It's written in Java, using NetBeans, Swing and with Scripting done in Groovy. It's a very generic Transaction Reader and Simulator.

Why? It is just fun! I'm learning a lot by working on it, and it's pretty useful, at least for me.

A Transaction is either a specific kind of a message, or a record in a file.

These transactions are Financial transactions (mostly). But the project is generic enough to handle any message or record, as long as you know what the messages are supposed to be like.

Messages can be in any kind of format. I am currently working on BASE24 ISO, BASE24 ATM and POS Internal messages, and some ATM native messages. Other messages like VISA, MasterCard, or any Switch can be easily added to the system. More on this later.

Currently, the system can read ACI XPNET Audit Files, and any Tandem Enscribe file. Just transfer the files to the PC in Binary (or use the built-in transparent FTP feature in the program) to read all records and expand them.

So, what can you do with these messages? Well, the main interface displays the expanded messages as a cool looking Tree, XML, or Hex Dump. You can search the files for any text, or any value in any field. You can output the data as nice XML, or CSV. Or even write a script to put the data into an RDBMS. I used it to extract BASE24 TLF fields (including some Tokens) to an RDBMS for analysis.

The Simulator is still in the early stages of development, but so far it can open TCP sockets and talk back to a BASE24 host in BASE24 ISO-8583 messages. The Simulator is actually a Groovy script that talks at a high level using Structures. It does NOT do any of the cumbersome ISO message packing at all; that is all done by the Message Templates. All the Groovy script needs is something like:

MSG.dump()
if (MSG.MSG_TYP.value == "0800") {
    MSG.MSG_TYP.value = "0810";
    bits = MSG.BIT_FIELDS;
    // Add the response code field
    resp = MSG.BIT_FIELDS.add("39.RESP_CDE");
    // and set to 00
    resp.value = "00";
    // then reply
    SRC.write(MSG)
    // and we are done.
    return true;
}

One thing that I'm planning to do is have the program read some file (an Audit or Transaction Log) and create messages to send to a receiver. That will be fun, and very useful for testing and troubleshooting.

Literally anything is possible. Really! I used that!

Okay, so how do you define a Message? All messages are defined in XML, and human-readable XML at that.

The XML describes what the message looks like. The program does the conversion in and out; you do not write a single line of code to expand any message. A sample XML snippet that describes a message is:

<Struct name="B7">
    <BNumber size="8" name="RBA"/>
    <FString size="35" name="TLF_NAM"/>
    <FString size="1" name="TKN_RETRV_OPT"/>
    <Struct name="ATM_KEY">
        <Struct name="CRD">
            <FString size="4" name="LN"/>
            <FString size="4" name="FIID"/>
            <FString size="19" name="PAN"/>
            <FString size="3" name="MBR_NUM"/>
        </Struct>
    </Struct>
</Struct>
That XML should be readable and maintainable by humans.

I created some utilities in the program itself to convert DDL (the DDLTAL kind) to XML, because Tandem does not give me XML of its data structures. And because I am too lazy to convert it by hand. I'd spend 2 weeks automating something I can manually do in 2 hours. That's just me.


Which brings me to Xerialize. To create and read the XML, I initially worked with JAXB. I also tried XStream and Simple. Those did not give me the flexibility I needed, especially with the constructs I have. JAXB was not pleasant, but it did the job. It is very picky about certain things, especially polymorphic lists, which I rely on heavily. When I sub-classed some base class, some properties were written to the XML twice! But the last straw was when I upgraded to JDK 1.6u5 (I was using u3): my XML could not be read at all. Something was broken, and not just for me. I posted an issue on the JAXB list, and some other people had the same issue. And the issue hasn't even been looked at for a month now!

Screw JAXB, I'll create my own XML. And so Xerialize was born, about four days ago. Xerialize is a very generic library for serializing Java objects using Annotations.

You just annotate the getters and setters the way you need them, and XML is written, and read back. Your annotated getters and setters are called as usual when reading and writing. Discovery is all done at runtime using Reflection. Anything that is not annotated is skipped.
No XJC, no Schema compiler, no nested XmlElement for XmlElements. Plain and simple. I currently use DOM to do the XML work, but may change that later.
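To give a feel for it, here is roughly what an annotated class could look like. The annotation name below is made up purely for illustration; the real Xerialize annotations (and exactly where they go) may well differ:

// Illustration only: @XerAttribute is a made-up annotation standing in for whatever
// Xerialize actually uses. Only annotated getters / setters take part in the XML, so
// this class would round-trip as something like <FString size="35" name="TLF_NAM"/>.
public class FString {
    private int size;
    private String name;

    public FString() { }                    // a public default constructor is required

    @XerAttribute                           // simple built-in property -> XML attribute
    public int getSize() { return size; }
    public void setSize(int size) { this.size = size; }

    @XerAttribute
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getDescription() { return ""; }   // not annotated, so skipped entirely
}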

These are the general rules for using Xerialize:
  1. The tag name is the Java class name, except for some nested lists. You can qualify the class name's package using an XML namespace. So in the sample above, FString is a Java class.
  2. Simple built-in class properties should be XML attributes. Annotate the ones you need.
  3. Object properties are XML Elements. Annotate the ones you need.
  4. Generic or Polymorphic lists should be handled without any extra configuration. If FStringV is a sub-class of FString, then I can put that XML element in and reading and writing just work.
  5. I know what methods to use for getters and setters, so Xerialize will only use what I annotate. You can actually hide an annotated property in a subclass by overriding the getter and setter and not annotating them.
  6. Default values are null when writing XML. Any null element will not be written. If an element is not in the XML, then the setter will not be called.
  7. Classes must have a public default constructor.
  8. All annotated methods must be public. Xerialize does not use magic to look at private fields or methods.
  9. XML must support namespaces and XInclude.
  10. Xerialize must support an easy-to-use method for nested, Tree-like nodes: a node containing a list of other nodes, and so on, where all these nodes are subclasses of one class. In the example above, Struct is the superclass of both BNumber and FString.
And there you have it. I just opened a Google Code project and will publish the first draft soon.

Friday, March 28, 2008

Never in Bahrain!


Only in America. And it actually does happen. I've seen people do that.

Of course in Bahrain or the Gulf, we get full-time, live-in maids to get us cups of water...

Efficiency?

I hear this kind of conversation among women all the time, in almost the exact same sequence:

A: Salam, how are you doing?
B: Salam, very good. and you?
A: Good Alhamdilla. How's X?
B: She's good.
... and this "how are you" continues for some time... It's not the main purpose of the call / conversation at all. Just a preliminary... now, back to A, talking to B.
A: Do you know, Y?
B: Yeah, she's almost due, right?
A: She just gave birth, yesterday...
B: Yooo.. Hamdilla a' salama. How is she?
A: She's okay, tired, but she's very happy.
B: Boy or girl?
A: Boy.
B: Mabrook. She wanted a boy.. what did they call him?
A: Ali. Like his grandfather...

.. and this conversation continues... talks about weight, hair color, eye color, father, and everything else...

There will always be the sequence of at least:

a: X gave birth.
b: Boy or girl?
a: Boy.
b: What did they call him?
a: Ali.

and possibly many others in between.

So, whenever I have to be A, I start with:
A: Salam. Ahmed just became Abu Ali.

There... That's at least three sentences shortened to just one simple, complete piece of information.

That's what 20 years programming will do for you :)

Just Google it!