AJAX + JSON - XML = AJAJ?
It might be fun, but is it a good idea? The open-ils project has been proud of its AJAX-like architecture, with a twist: they see XML as too heavy, so they use JSON instead. (AJAJ?)
"We've taken XMLHTTPRequest one step further and added JSON to the mix. JSON (mentioned in this blog previously) is a 'lightweight data-interchange format' (see json.org). It gives us a way to turn program objects into strings, or serialize them. JSON is great for us because it's a lot lighter than XML. It allows us to encode our data with practically no extraneous data clogging the lines. As a quick example, an array converted to JSON would look something like this: [ 1, 2, 3]. Whereas in XML it might appear like so: <array> <item>1</item> <item>2</item> <item>3</item> </array>. Even with a small dataset you see an immediate difference in the number of characters required to encode the object."
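The size difference they describe is easy to measure yourself. Here is a quick Node.js sketch of my own (not open-ils code) that serializes the same array both ways and counts the characters:

```javascript
// Compare the wire size of the same data as JSON vs. a naive XML encoding.
const data = [1, 2, 3];

const asJson = JSON.stringify(data); // "[1,2,3]"
const asXml =
  "<array>" + data.map(n => "<item>" + n + "</item>").join("") + "</array>";

console.log(asJson, asJson.length); // [1,2,3] 7
console.log(asXml, asXml.length);   // <array><item>1</item>... 57
```

Even on this toy example the XML encoding is roughly eight times larger, and the gap only grows once element names get longer than `item`.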
I am not quite sure what to make of this. It reminds me of a number of conversations I have had, mainly with older, more experienced colleagues, who approach me to explain the "XML thing" to them. Almost invariably, they are horrified at the overhead that XML carries with it. Almost invariably, I try to explain that it is worth it.
The open-ils team has also extended JSON with something they call "class hints".
"JSON parsers exist in many languages, and we've developed our own parsers in C, Perl, and Javascript. Why did we write our own, you ask? You guessed it - we took JSON one step further as well. We added what we call class hints to the JSON format. This allows us to parse a JSON string and determine what type of object we're looking at based on a hint encoded as a comment within the object. So, for example, the Javascript JSON parser might receive a JSON string from the server that is encoded with a class hint of 'user'. The JSON parser will then be able to turn the JSON string into a full blown Javascript 'user' object that knows how to access and update the data it contains."
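The quoted mechanism might look something like the sketch below. To be clear, the `/*--hint--*/` syntax and the `user` example are my own guesses at how a comment-encoded hint could work, not open-ils's actual wire format:

```javascript
// Parse a JSON string that may carry a leading comment-style class hint,
// e.g. '/*--user--*/{"name":"pat","id":42}'. (Hypothetical hint syntax.)
function parseHinted(str) {
  const m = str.match(/^\/\*--(\w+)--\*\//);
  const hint = m ? m[1] : null;             // class name, if a hint is present
  const body = m ? str.slice(m[0].length) : str; // remainder is plain JSON
  return { hint, data: JSON.parse(body) };
}

const msg = '/*--user--*/{"name":"pat","id":42}';
const { hint, data } = parseHinted(msg);
console.log(hint);      // "user"
console.log(data.name); // "pat"
```

A dispatcher could then look `hint` up in a class registry and construct the corresponding "full blown" object, which is presumably what their custom parsers do. Note that this is exactly the part a standard JSON parser would reject, since the JSON grammar has no comments.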
It goes without saying that had they stuck with XML, all of their data objects would have been self-describing. In fact, they had to break the JSON standard precisely because it is too concise to be self-describing. Was it worth it? Are the open-ils guys and gals just being conservative in putting data optimization before usability, or are they on to something that the AJAX crowd is missing?
inside the man
About Me
- thrashor
- Edmonton, Alberta, Canada
- Returned to working as a Management Consultant, specializing in risk, security, and regulatory compliance, with Fujitsu Canada after running the IT shop in the largest library in the South Pacific.
3 comments:
While I understand your concerns, for I shared them initially, I'll point out a couple of things for you to consider:
* How do you, and we, define "usability"?
Open-ILS defines it as "the property of a project that allows work to get done in an efficient manner." Choosing an optimized wire protocol is part of that because the libraries in our consortium are on rather slow network connections. Our JSON-based protocol is more than 50% smaller, sometimes as much as 90%, and the deserializer is significantly faster than an XML parser.
* Use the right tool for the job
As a supporting argument to the point above, consider a pure Java application: one would write a JNI-based app and then add a SOAP interface to it. Since the main client applications of the Open-ILS backend will be a XUL-based staff client and the DHTML-based OPAC, JSON is the tool for the job.
That being said, here's a point which should allay your concerns. Since we've already written the XML version of everything we're doing in JSON, as far as data transfer is concerned, we are planning to add support back into OpenSRF to allow you to choose which format you'd like the result of a method request to be returned in.
One last thing about the "self describing" nature of XML: it's only self describing to humans, not to machines. With the possible exception of the "semantic web" uses of RDF, an application that uses XML as a messaging protocol can only understand and use tags that it has been told about using an XSD or similar authority document. This is exactly the same position we are in when using JSON, with the exception that you, as a human, need to look up the meaning of a field within a hinted structure. I'll definitely accept that one extra step of "indirect documentation" as the price for the measured gain in performance we've received.
Of course, the reason I'm posting this response isn't to fight over XML, or even, really, to defend our design decisions. These decisions are ours to live with, right or wrong. The real reason I am responding is that it's in the best interest of our project to reevaluate our design any time the opportunity arises. I'm very glad you started the discussion on JSON, since it forced me to double check our design and make sure we're moving in the right direction. We all thank you, sincerely, for the constructive sharp-stick-in-eye ;), and I, personally, hope you'll continue to take enough interest in our project to question our design, and to contribute in other ways down the road.
Cross-posting this to our blog
You have made a strong defense of your design decision to use JSON in place of XML in client-server communications. For example, libraries in North America are not well known for having high network capacity, so reducing the load on the wire is not a bad idea. My last remaining concern is whether the use of JSON instead of XML will be a barrier to interoperability and extensibility. What I mean is, there are thousands of tools and developers that understand XML, but far fewer that understand JSON. Now, it is not that JSON is hard to understand, parse, or otherwise work with, but I would love to see - in my lifetime - libraries that are compliant with standards in use outside the library world. I would not want JSON to be the next MARC or Z39.50. I agree 100% with Kenton Good in this respect. However, I note that you say that you have built XML flavors of all of your JSON interfaces - will these be maintained throughout your development cycle?
I am a great champion of your efforts. In fact, I will state publicly that I predict the success of this project will drive much needed innovation among the commercial ILS vendors. Increased choice and increased innovation will only be good for our libraries.
As for helping out - I would love to do a security review of your architecture and code base, if you are interested.
Cross posted here.
I've been arguing with the inventor of JSON himself about an optional Javascript identifier at the front of a JSON construct, which could go a long way toward a more "self-describing" JSON. I've blogged on it, at first proposing it be called JSON++, but now I have another name in mind, and have completed my first stab at a parser, for Java.
In the meantime google on "JSON++" and you can find my blog post and the relevant usenet article. Feel free to contact me via email and/or don't be surprised if you hear from me when I unveil version 0.1.
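For readers who can't find the post: the idea of an identifier in front of a JSON construct would presumably look something like the sketch below. The `User(...)` syntax is my own guess at the proposal, not the commenter's actual grammar:

```javascript
// Parse a JSON value optionally prefixed with a type identifier,
// e.g. 'User({"name":"pat"})'. (Hypothetical syntax for illustration.)
function parseTyped(str) {
  const m = str.match(/^([A-Za-z_]\w*)\((.*)\)$/s);
  if (!m) return { type: null, value: JSON.parse(str) }; // plain JSON
  return { type: m[1], value: JSON.parse(m[2]) };
}

const r = parseTyped('User({"name":"pat"})');
console.log(r.type);       // "User"
console.log(r.value.name); // "pat"
```

Compared with a comment-embedded class hint, a leading identifier has the nice property that the untyped form remains standard JSON.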