Akoma Ntoso, HTML5, LegisPro Sunrise, Standards, technology, Track Changes, Uncategorized, W3C

LegisPro Sunrise!!!

LegisPro Sunrise is almost done! It has taken longer than we had hoped, but we are finally getting ready to begin limited distribution of LegisPro Sunrise, our productised implementation of our LegisPro drafting and amending tools for legislation and regulations. If you are interested in participating in our early release program, please contact us at info@xcential.com. If you already signed up, we will be contacting you shortly.

LegisPro Sunrise is a desktop implementation of our web-based drafting and amending products. It uses Electron from GitHub, built on Google’s Chromium project, to bundle all the features we offer, both the client and server sides, into a single easy-to-manage desktop application with an installer and auto-update facilities. Right now, the Windows platform is supported, but MacOS and Linux support will be added if the demand is there. You may have already used other Electron applications – Slack, Microsoft’s Visual Studio Code, WordPress, some editions of Skype, and hundreds of other applications now use this innovative new application framework.

Other versions

In addition to the Sunrise edition, we offer LegisPro in customised FastTrack implementations of the LegisPro product or as fully bespoke Enterprise implementations where the individual components can be mixed and matched in many different ways.

Akoma Ntoso model

LegisPro Sunrise comes with a default Akoma Ntoso-based document model that implements the basic constructs seen in many parliaments and legislatures around the world that derive from the Westminster parliamentary traditions.

Document models implementing other parliamentary or regulatory traditions such as those found in many of the U.S. states, in Europe, and in other parts of the world can also be developed using Akoma Ntoso, USLM, or any other well-designed XML legislative schema.

Drafting & Amending

Our focus is on the drafting and amending aspects of the parliamentary process. By taking a digital-first approach, we are able to offer many innovative features that improve and automate the process. Included out of the box is what we call amendments in context, where amendment documents are extracted from changes recorded in a target document. Other features can be added through an extensive plugin mechanism.

Basic features

Ease-of-use

While offering sophisticated drafting capabilities for legislative and regulatory documents, LegisPro Sunrise is designed to provide the familiarity and ease-of-use of a word processor. Where it differs is in what happens under the covers. Rather than drafting using a general-purpose document model and relying on styles and formatting to capture the semantics, we directly capture the semantic structure of the document in the XML. But don’t worry – as a drafter, you don’t need to know about the underlying XML; that is something for the software developers to worry about.
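
The difference, in a nutshell (both fragments below are invented for illustration):

    <!-- Word-processor approach: the semantics hide in formatting conventions -->
    <p style="SectionHeading">Section 1. Definitions</p>

    <!-- Semantic XML: the structure is explicit and machine-processable -->
    <section eId="sec_1">
      <num>Section 1.</num>
      <heading>Definitions</heading>
    </section>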

Templates

Templates allow the boilerplate structure of a document to be instantiated when creating a new document. Out of the box, we provide generic templates for bills, acts, amendments, amendment lists, and a few other document types.

In addition to document templates, component templates can be specified, or are synthesised when necessary, to be used as parts when constructing a document.

For both document and component templates, placeholders are used to highlight areas where text needs to be provided.

Upload/Download

As a result of our digital-first focus, we manage legislation as information rather than as paper. This distinction is important – the information is held in XML repositories (a form of database) where we can query, extract, and update provisions at any level of granularity, not just at the document level. However, to allow for the migration from a paper-oriented to a digital-first world, we do provide upload and download facilities.

Undo/Redo

As with any good document editor, unlimited undo and redo is supported – going back to the start of the editing session.

Auto-Recovery

Should something go wrong during an editing session and the editor closes unexpectedly, an auto-recovery feature restores your document to the state it was in, or close to it, when the editor closed.

Contextual Insert Lists

We provide a directed or “correct-by-construction” approach to drafting. What this means is that the edit commands are driven by an underlying document model that is defined to enforce the drafting conventions. Wherever the cursor is in the document, or whatever is selected, the editor knows what can be done and offers lists of available document components that can either be inserted at the cursor or around the current selection.

Hierarchy

Document hierarchies form an important part of any legislative or regulatory document. Sometimes the hierarchy is rigid and sometimes it can be quite flexible, but either way, we can support it. The Sunrise edition supports the hierarchy Title > Part > Chapter > Article > Section > Subsection > Paragraph > Subparagraph out of the box, where any level is optional. In addition, we provide support for cross headings, which act as dividers rather than hierarchy in the document. Customised versions of LegisPro can support whatever hierarchy you need – to any degree of enforcement. A configurable promote/demote mechanism allows any level to be morphed into other levels up and down the hierarchy.
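
To give a flavour of what this looks like beneath the surface, here is a minimal sketch of such a hierarchy in Akoma Ntoso markup (the numbering, text, and eId values are invented for illustration):

    <part eId="part_1">
      <num>Part 1</num>
      <heading>Preliminary</heading>
      <chapter eId="part_1__chp_1">
        <num>Chapter 1</num>
        <section eId="part_1__chp_1__sec_1">
          <num>1.</num>
          <heading>Short title</heading>
          <subsection eId="part_1__chp_1__sec_1__subsec_1">
            <num>(1)</num>
            <content><p>This Act may be cited as…</p></content>
          </subsection>
        </section>
      </chapter>
    </part>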

Large document support

Rule-making documents can be very large, particularly when we are talking about codes. LegisPro supports large documents in a number of ways. First, the architecture is designed to take advantage of the inherent scalability of modern web browsing technology. Second, we support the portion mechanism of Akoma Ntoso, which allows portions of a document, at any provision level, to be edited alone. A hierarchical locking mechanism allows different portions to be edited by different people simultaneously.
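
A sketch of how the portion mechanism packages a single provision from a larger act (the attribute values are invented, and the exact usage should be checked against the schema):

    <akomaNtoso xmlns="http://docs.oasis-open.org/legaldocml/ns/akn/3.0">
      <portion includedIn="/akn/ex/act/2017/1/main">
        <meta><!-- identification of the containing work/expression --></meta>
        <portionBody>
          <section eId="sec_12">
            <num>12.</num>
            <heading>…</heading>
          </section>
        </portionBody>
      </portion>
    </akomaNtoso>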

Spelling Checker

Checking spelling is an important part of any document editor, and we have a solution – a tightly integrated third-party service that gives a rich and comprehensive result. Familiar red underline markers show potential misspellings. A context menu provides alternative spellings, or you can add the word to a custom dictionary.

Tagging support

Beyond basic drafting, tagging of people, places, or things referred to in the document is something for which we have found a surprising amount of interest. Akoma Ntoso provides rich support for ontologies, and we build upon this to allow numerous items to be tagged. In our FastTrack and Enterprise solutions we also offer auto-tagging technologies to go with the manual tagging capabilities of LegisPro Sunrise.

Document Bar

The document bar at the top of the application provides access to a number of facilities of the editor, including undo/redo, selectable breadcrumbs showing your location in the document, and various mode indicators which reflect the current editing state of the editor.

Command Ribbons and Context Menus

Command ribbons and context menus are how you access the various commands available in the editor. Some of the ribbons and menus are dynamic, changing to reflect the location of the cursor or selection in the editor. These dynamic elements show the insert lists and any editable attributes. Of course, there is also an extensive set of keyboard shortcuts for many commonly used commands. It has been our goal to ensure that the majority of common drafting tasks can be accomplished from the keyboard alone.

Sidebar

A sidebar along the left side of the application provides access to the major components which make up LegisPro Sunrise. It is here that you can switch among documents, access on-board services such as the resolver and amendment generator and outboard services such as the document repository, and manage the primary settings.

Side Panels

Also on the left side are configurable side panels which provide additional views needed for drafting. The Resources view is where you look up documents, work with the hierarchy of the document being edited, and view provisions of other documents. The Change Control view allows the change sets defined by the advanced change control capabilities (described below) to be configured. Other panels can be added as needed.

Advanced features

In addition to the rich capabilities offered for basic document editing, we provide a number of advanced features as well.

Document Management

Document management allows documents to be stored in an XML document repository. The advantage of storing documents in an XML repository rather than in a simple file share or traditional content management system is that it allows us to granularise the provisions within the documents and use them as true reference-able information – this is a key part of moving away from paper-centric thinking to a modern digital-first mindset. An import/export mechanism is provided to add external documents to the repository or get copies out. For LegisPro Sunrise we use the eXist-db XML database, but we can also provide customised implementations using other repositories.
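
For example, with the documents held in eXist-db, fetching a single provision rather than a whole document is a one-liner in XQuery (a sketch – the collection path and eId value are invented):

    xquery version "3.1";
    declare namespace akn = "http://docs.oasis-open.org/legaldocml/ns/akn/3.0";

    (: pull one section out of the repository, at provision-level granularity :)
    collection('/db/legislation')//akn:section[@eId = 'sec_12']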

Resolver

Our document management solution is built on the FRBR-based metadata defined by Akoma Ntoso and uses a configurable URI-based resolver technology to turn human-readable, permanent URIs into actual URLs pointing to locations within the XML repository or even to other data sources available on the Web.
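
A much-simplified sketch of the idea (the URI shape roughly follows Akoma Ntoso’s FRBR-based naming; the repository layout is invented for illustration):

    // Resolve a permanent Akoma Ntoso URI to a concrete repository URL, e.g.
    // '/akn/uk/act/2003/20/eng@2015-03-26' – the expression of an act as it
    // stood on a particular date.
    function resolve (uri) {
      const match = /^\/akn\/(\w+)\/(\w+)\/(\d{4})\/([^/@]+)(?:\/(\w+)(?:@([\d-]+))?)?$/.exec(uri);
      if (!match) { throw new Error(`Unrecognised URI: ${uri}`); }
      const [, country, docType, year, number, lang, asAt] = match;

      // A hypothetical repository layout – a real resolver consults the FRBR
      // metadata to select the correct expression and manifestation.
      let url = `/repo/${country}/${docType}/${year}/${number}`;
      if (lang) { url += `/${lang}`; }
      if (asAt) { url += `?asAt=${asAt}`; }
      return url;
    }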

Page & Line numbering

There are two ways to record where amendments are to be applied – either logically, by identifying the provision, or physically, by page and line numbers. Most jurisdictions use one or the other, and sometimes even both. The tricky part has always been the page and line numbers. While modern word processors usually offer page and line numbers, they are dynamic and change as the document is edited. This makes the feature of limited use in an amending system. What is preferred is static page and line numbers that reflect the document at the last point it was published for use in a committee or chamber. We accomplish this using a back-annotation technology within the publishing service. LegisPro Sunrise also offers a page and line numbering feature that can be run without the publishing service. Page and line numbers can be displayed in the left or right margin or inline, depending on preference.
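
Akoma Ntoso anticipates this need with its <eol> (end of line) and <eop> (end of page) marker elements. A sketch of back-annotated text (the numbers are invented, and the exact attribute names should be checked against the schema):

    <p>The Secretary of State may by regulations<eol number="14"/>
    make provision about the circumstances in which<eol number="15"/>
    a licence may be granted.<eop number="7"/></p>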

Amendment Generation

One of the real benefits of a digital first solution is the many tasks that can be automated – not by simply computerising the way things have always been done, but by rethinking the approach altogether. Amending is one such area. LegisPro Sunrise incorporates an onboard service to automatically generate amendment documents from changes recorded in the target document. Using tracked changes, the document hierarchy, and annotated page and line numbers, we are able to very precisely record proposed changes as amendments. Of course, the amendment generator works with the change sets to allow different amendment sets to be generated by specifying the named set of changes.

Plugin Support

LegisPro Sunrise is not the first incarnation of our LegisPro offering. We’ve been using the underlying technologies, and precursors to those technologies, for years with many different customers. One thing we have learned is that there is vast variation in needs from one customer to another. In fact, even individual customers sometimes require very different variations of the same basic system to automate different tasks within their organisation. To that end, we’ve developed a powerful plugin approach which allows capabilities to be added as necessary without burdening the core editor with a huge range of features of limited applicability. The plugin architecture allows onboard and outboard services, individual commands, menus, menu items, side panels, mode indicators, JavaScript libraries, and text string libraries to be added. In the long term, we’re planning to foster a plugin development community.

Proprietary or Open Source?

There are two questions that always come up relating to our position on standards and open source software:

  1. Is it based on standards? Yes, absolutely – almost to a fault. We adhere to standards whenever and however we can. The model built into LegisPro Sunrise is based on the Akoma Ntoso standard that has been developed over the past few years by the OASIS LegalDocML technical committee. I have been a continual part of that effort since the very beginning. But beyond that, we always choose standards-based technologies for inclusion in our technology stack. This includes XML, XSLT, XQuery, CSS, HTML5, ECMAScript 2015, among others.
  2. Is it open source?
    • If you mean, is it free, then the answer is only yes for evaluation, educational, and non-production uses. That’s what the Sunrise edition is all about. However, we must fund the operation of our company somehow and as we don’t sell advertising or customer profiles to anyone, we do charge for production use of our software. Please contact us at info@xcential.com or visit our website at xcential.com for further information on the products and services we offer.
    • If you mean, is the source code available, then the answer is also yes – but only to paying customers under a maintenance contract. We provide unfettered access to our GitHub repository to all our customers.
    • Finally, if you’re asking about the software we built upon, the answer is again yes, with a few exceptions where we chose a best-of-breed commercial alternative over any open source option we had. The core LegisPro Sunrise application is entirely built upon open source technologies – it is only in external services where we sometimes rely upon commercial third-party applications.

What does it cost?

As I already alluded to, we are making LegisPro Sunrise available to potential customers and partners, academic institutions, and other select individuals or organisations for free – so long as it is not used in production, including drafting, amending, or compiling legislation, regulations, or other forms of rule-making. If you would like a production system, either a FastTrack or Enterprise edition, please contact us at info@xcential.com.

Coming Soon

Book

I will soon be providing a pocket handbook on Akoma Ntoso. As a member of the OASIS LegalDocML Technical Committee (TC) that standardised Akoma Ntoso, I felt it important to have the handbook reviewed for accuracy by the other TC members. We are almost done with that process. Once the final edits are made, I will provide information on how you can obtain your own copy.

Standard
Akoma Ntoso, How To, Standards, Uncategorized

Using the <hcontainer> Element Properly

When I started my blog five years ago, I said I would try not to get too technical. Overall, I’ve stuck to that. However, with Akoma Ntoso now essentially standardised, I think it is time to start covering some areas of it in a little more technical detail. So, from time to time, I’m going to delve into a little technical mumbo jumbo to cover some subjects that come up frequently.

In this blog, I want to cover the proper use of the <hcontainer> element. Akoma Ntoso has rich support for hierarchical documents, as legal documents tend to be strongly hierarchical. Consequently, there is a large selection of element tags to choose from. During the standardisation effort, we tried to identify as many hierarchical constructs as we could find in legal documents, but it was impossible to identify every single construct in every single jurisdiction around the world. Indeed, we sometimes decided that some hierarchical levels were just too unique to a specific jurisdiction to warrant inclusion in a standard intended for worldwide adoption. Sometimes, having too many tags is worse than not having enough, especially when there is a way to handle the outlier cases.

So, what is the proper way to use the <hcontainer>? The <hcontainer>, or hierarchical container, is a generic element intended to be used to invent an element that is needed but not found among the existing Akoma Ntoso hierarchical elements. The @name attribute defines the name of the new element you’re inventing (see the example after the list below). For this reason, the value of @name should be consistent with the element naming convention of Akoma Ntoso:

  • The name should be lowerCamelCase.
  • The name should be in British English rather than another variant of English or another language. (Yes, we have two exceptions to this rule in Akoma Ntoso: one because the English form didn’t exist and one because we didn’t notice a spelling variation.)
  • The name should not already exist in Akoma Ntoso.
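
For example, a jurisdiction with a hierarchical level called, say, a “rubric” – a made-up name with no Akoma Ntoso equivalent – might mark it up as follows:

    <hcontainer name="rubric" eId="rubric_1">
      <num>1.</num>
      <heading>…</heading>
      <content>
        <p>…</p>
      </content>
    </hcontainer>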

One question that comes up from time to time is whether an <hcontainer> can be used to define an element that already exists, but in another language. For instance, could I define <hcontainer name="artículo"> to define a Spanish article rather than use <article>? While there is nothing that prevents this practice, it would not be in the spirit of Akoma Ntoso. A large part of the motivation of Akoma Ntoso is to promote both data and tool interoperability. Localising the element tags completely undermines Akoma Ntoso as a standard. You might as well simply use your own schema. Please consider the consumers of your data when facing this question, not just the producers.

[We use an alternate mechanism, provided by our tools, to present a localized term to the user rather than the element name.]

Another question I’ve been asked has to do with hierarchical levels that might not have a formalised name at all. I’ve come across this a number of times, in a number of ways. First, it’s often an issue with very old documents where the document hierarchy was either not formalised or not explicitly stated, and conversion involves some degree of guessing. Second, there are sometimes lower levels, for instance below the section level, where the level names have simply not been formalised or are used inconsistently. Third, I’ve come across a case where the upper levels, above the section, were not named because the corresponding concepts didn’t really exist in the language used in that jurisdiction. For these cases, we use the <level> element.
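
The markup then works just like any other hierarchical element, only without a formal name (the content here is invented):

    <level eId="lvl_1">
      <num>(a)</num>
      <content><p>…</p></content>
    </level>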

The <hcontainer> is a very useful element in Akoma Ntoso. It’s a key part of the design of the schema that allows it to be easily adapted to any legislative tradition. However, it should be used judiciously — only when there isn’t already an alternative.

Standard
LEX Summer School, Process, technology, Uncategorized

Escaping a Technology Eddy

Do you need to escape a technology eddy? In fluid dynamics, an eddy is the swirling of a fluid that causes a reverse current against a downstream flow. It often forms behind a major obstacle. The swirling motion of an eddy creates resistance to forward motion by creating a backward force. Eddies are also seen in air and electromagnetic systems.

I see a similar phenomenon in my work, which I’m going to call a technology eddy. A technology eddy forms in organisations that are risk averse, have restricted budgets, or are simply more focused on the maintenance of a major system than on software development. Large enterprises, in particular, often find their IT organisations trapped in a technology eddy. Rather than going with the flow of technological change, the organisation drifts into a comfortable period where change is restricted to the older technologies they are most familiar with.

As time goes by, an organisation trapped in a technology eddy adds to the problem by building more and more systems within the eddy — making it ever more difficult to escape the eddy when the need arises.

I sometimes buy my clothing at Macy’s. It’s no secret that Macy’s, like Sears, is currently struggling against the onslaught of technological change. Recently, when paying for an item, I noticed that their point-of-sale systems still run on Windows 7 (or was that Windows Vista?). Last week, on the way to the airport, I realised I had forgotten to pack a tie. So, I stopped in to Macy’s only to find that they had just experienced a 10-minute power outage. Their ancient system, what looked to be an old Visual Basic Active Directory app, was struggling to reboot. I ended up going to another store – all the other stores in the mall were back up and running quite quickly. The mall’s 10-minute power outage cost Macy’s an hour’s worth of sales because of old technology. The technology eddy Macy’s is trapped in is not only costing them sales in the short term, it’s killing them in the grand scheme of things. But I digress…

I come across organisations trapped in technology eddies all the time. IT organisations in government are particularly susceptible to this phenomenon. In fact, even Xcential got trapped in a technology eddy. With a small handful of customers and a focus on maintenance over development for a few years, we had become too comfortable with the technologies that we knew and the way in which we built software.

It was shocking to me when I came to realise just how out-of-date we had become. Not only were we unaware of the latest technologies, we were unaware of modern concepts in software development, modern tools, and even modern programming styles. We had become complacent, assuming that technology from the dawn of the Millennium was still relevant.

I hear a lot of excuses for staying in a technology eddy. “It works”, “all our systems are built on this technology”, “it’s what we know how to build”, “newer technologies are too risky”, and so on. But there is a downside. All technologies rise up, have a surprisingly brief heyday, and then slowly fade away. Choosing to continue within a technology eddy using increasingly dated technology ensures that sooner or later, an operating system change or a hardware failure of an irreplaceable part will create an urgent crisis to replace a not-all-that-old system with something more modern. At that point, escaping the eddy will be of paramount importance and you’ll have to paddle at double speed just to catch up. This struggle becomes the time when the price for earlier risk mitigation will be paid — for now the risks will compound.

So how do you avoid the traps of a technology eddy? For me, the need to escape our eddy became most apparent as we got exposed to people, technologies, and ideas that were beyond the comfortable zone in which our company existed. Hearing new ideas from developers beyond our sphere of influence and being exposed to requirements from new customers made us quickly realize that we had become quite old-fashioned in our ways. To stay relevant you must get out and learn — constantly. Go to events that challenge your thinking rather than reinforce it.

Today we are once more a state-of-the-art company. We’ve adopted modern development techniques, upgraded our tools, upgraded our technologies, and upgraded our coding skills. These changes allow us to compete worldwide and build software for multiple customers in a fully distributed way that spans companies, continents, and time zones.

I hope we’ll remember this lesson and focus more on continuous improvement rather than having to endure a crash course of change every few years.


Standard
Standards, technology, Uncategorized, W3C

The many lives of JavaScript

I recently worked out that I’ve learned, on average, a new programming language every two to three years. These many languages have been part of my toolbox for somewhere between four and six years before falling away to make room for new technologies. However, there is one programming language that has been a major part of my programming repertoire for almost 22 years now – and that is JavaScript.

My JavaScript programming skills have recently undergone a major renaissance as I’ve adopted ES6 (aka ECMAScript 2015) for most of my coding. The way I write code today is nothing like the code I wrote just one year ago – and I’ve gone back and largely modernised all active code to be consistent. Today’s programming style uses modern frameworks and is far more object oriented and asynchronous. There are many new features which have totally updated how I write code. Proper (while still limited) classes with mixins have replaced the ugly prototype mechanism I used to use for object orientation. Let and const declarations have caught latent bugs that were hidden in my code. Arrow functions (aka lambda expressions) and promises have streamlined code that once was quite clunky. The list goes on…
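
A contrived snippet (not from our codebase) showing a few of these features working together:

    // A mixin is just a function from a base class to an extended class.
    const Undoable = Base => class extends Base {
      undo () { /* restore the previous state */ }
    };

    class Editor extends Undoable(Object) {
      constructor (name) {
        super();
        this.name = name;
      }
      open (uri) {
        // Arrow functions don't rebind `this`, and promises flatten the
        // callback pyramids of old.
        return fetch(uri)
          .then(response => response.text())
          .then(xml => `${this.name} loaded ${uri} (${xml.length} characters)`);
      }
    }

    const editor = new Editor('LegisPro');  // `const` catches accidental reassignment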

Even my tools have changed. Microsoft’s surprisingly excellent Visual Studio Code has replaced the hodgepodge of tools I once used. We’re in the process of integrating Jasmine and Karma into the process. JavaScript Semistandard Style (no, I still like semicolons) has ensured a very clean code base – as well as catching a multitude of errors and sins.

All this change got me thinking about the four lives of JavaScript that I have worked through. Way back when, JavaScript had an awkward birth at the hands of Netscape as the lesser stepchild of the new Java programming language from Sun that was taking away all the attention. JavaScript was just a way to glue Java applets together in the browser. The problem is, Java applets really sucked.

Microsoft quickly saw the value of JavaScript though, and launched their own effort to steal Netscape’s baby. And so, JavaScript was stolen, renamed JScript, and made to be the adopted sibling of Microsoft’s other scripting language, VBScript. One good bit of progress that Microsoft made was to sponsor the standardisation of the language, although the resulting name of ECMAScript was another in a long string of unfortunate names the language has had to endure.

As JScript, JavaScript was to become an integral part of Microsoft’s entire ActiveX strategy. A lot of really cool technologies (yes, really) came of this allowing JScript to go beyond the browser. As an application extension language, it found its way into the XMetaL XML editor as the customisation technology. We used it and many of the ActiveX technologies to great effect when we implemented California’s bill drafting system. However, it didn’t just end there. We were able to use it on the server-side through Classic ASP and as a shell scripting language through the Windows Script Host. For a Microsoft-centric programmer, this era of JavaScript was a glorious one.

However, ActiveX was seriously flawed. It was entirely proprietary and riddled with problems. Microsoft abandoned it almost as quickly as they had adopted it – moving on to .Net where JScript.net was a non-starter. As Microsoft’s interest in ActiveX and even Internet Explorer waned in the early 2000s, life as a JavaScript programmer became ever gloomier. While the capabilities were awesome, there was obviously no future.

At this point, we made the somewhat painful decision to move away from Microsoft’s outmoded view of the Internet and go back to the basics. While it meant giving up a lot of capability, in the end it was an excellent decision for it pointed to the future. One tiny aspect of Microsoft’s ActiveX vision, the XMLHttpRequest object, escaped from Microsoft and gave rise to a whole different way of programming – Asynchronous JavaScript and XML (AJAX). This development and the emergence of new browsers, first Firefox and then Google’s Chrome with its V8 JavaScript engine, breathed new life into JavaScript.

Freed from Microsoft’s grip, JavaScript has flourished. The past decade has seen a plethora of new technologies. Isomorphic JavaScript (or Universal JavaScript) blurs the distinction between coding for the server and the client. In fact, technologies like Electron turn web-based application development back to the desktop where you can get the best of both worlds.

When I look back on the code I wrote during the ActiveX era (yes, we still support it), it looks prehistoric. Modern JavaScript is so much more capable and flexible than the clunky rendition we had back when COM-based ActiveX was supposed to change the world. As I mentioned earlier, how I program now is completely different – asynchronous programming is a difficult but very worthwhile skill to acquire.

Looking to the future, I see three paths. On one side is a mature but polarising platform that is dominated by Oracle. Oracle’s dominance ensures stability but also deters innovation. On the other side, one finds another mature but polarising platform, this one dominated by Microsoft. Here too, Microsoft’s dominance ensures stability but also deters innovation. It seems that both paths have now had their heyday. You don’t hear very much aspirational news from either technology path anymore – it must feel like programming a mainframe in COBOL at the height of the C/C++ era.

The third path seems to be the path of the future – staking out a middle ground that neither technology giant can stomp on. Sure, Google is a technology giant that plays a strong role, but they’re still reasonably well regarded by the development community at large (for now). It is this middle ground that has been the most fertile for new technologies – and JavaScript is right in the thick of it. There are so many new technologies it’s hard to keep track of them all — AngularJS, Node.js, React, Express.js, to name but a few. While this third path can play well with both of the other two, for me it is the path that truly points to the future.

This brings us to the fourth life for JavaScript – building on the momentum of the past decade to mount a credible challenge for enterprise apps. While I initially dismissed many of the new features of the language as mere syntactic sugar, my experience with it has shown it to be more. I now write much better code. I believe we’re on the verge of an explosion in JavaScript-enabled applications that will blur the distinction between the platforms, between the desktop and the browser, and between the server and the client. This is truly an exciting time, once more, to be developing in JavaScript.

It goes without saying, but stay tuned for more…

Standard
Akoma Ntoso, Standards, Uncategorized

Implementing an Akoma Ntoso Editor

Yes, we’ve now built a full real-world legislative drafting editor using the final release of the new OASIS standard for legislative XML known as Akoma Ntoso. No, it wasn’t easy, but drafting tools never are. While our project is not yet a finished implementation, it shows that Akoma Ntoso is adaptable to some of the most challenging demands it will face as a world-wide standard for digital legislation.

Akoma Ntoso is a very ambitious standard. It strives to anticipate all the possible needs that jurisdictions around the world will have while also planning for a wide range of useful applications that can be built on top of the data. The result is a sophisticated schema with many more features than any one implementation will ever need.

The trick is being able to mould Akoma Ntoso to fit the unique needs of a jurisdiction while also providing a user experience that is natural and fits the problem space exactly. This was the challenge that led us to develop a custom web-based XML editor. After surveying the available market of web-based editors, we quickly found that none would be sufficiently adaptable to allow Akoma Ntoso to realize its true potential.

There are two aspects of building an Akoma Ntoso editor that have required particular attention:

  1. Adapting Akoma Ntoso to fit the jurisdiction’s documents
    If you’ve taken a look at Akoma Ntoso, you know that it’s jam-packed full of tags and features, far more than are ever necessary in a single implementation. Trying to create a single comprehensive implementation of it all, a one-size-fits-all approach, will only yield an overly complicated and unusable tool that will be suitable to nobody. At the same time, despite Akoma Ntoso’s efforts to cover all possible scenarios, there are still gaps in the schema where details specific to individual jurisdictions are not covered. Akoma Ntoso anticipates this shortcoming by providing a pattern-centric mechanism for extending a set of generic elements to fill in the gaps.
    An authoring tool needs to hide or omit the unused parts of Akoma Ntoso, adapt the parts that are being used to fit the specific requirements of a jurisdiction, and allow for extension of Akoma Ntoso using the generic mechanism for extension in such a way that these extensions appear seamless. As it turns out, almost a third of the elements we’ve implemented are extension elements. The result is an editor that allows a fully compliant Akoma Ntoso document to be drafted (correct by construction), while at the same time ensuring that the document fully complies with the jurisdiction’s model for how that document should be represented.
  2. Adapting the editor to fit the jurisdiction’s documents
    XML authoring tools don’t just work out of the box. Rather, they’re toolkits that allow documents conforming to a specific schema or model to be authored. How much flexibility this toolkit provides dictates the type of documents that can be authored. Sadly, it’s difficult for any editor to provide infinite flexibility in any dimension – so very careful consideration is necessary to understand whether or not the editor can be adapted to the need.

    When we at Xcential implemented California’s bill drafting system a decade ago, we used XMetaL because it provided an extensive customisation capability. Unfortunately, at the outset we failed to realise that XMetaL’s change tracking capabilities were limited and not customisable. When the full challenge of redlining became clear to us well into the project, we realised we were using an editor that couldn’t do the job. Thankfully, the project was able to get (and pay for) the necessary extensions to XMetaL without too much delay.

    One way to understand this problem is in the diagram below. On the left is the intrinsic capability offered by the authoring tool. On the right are a jurisdiction’s requirements. As XML authoring tools are toolkits, there is always a gap between the intrinsic capabilities on the left and the requirements on the right – and this gap must be closed one way or another. One way is to use any programming API offered to add customisations (shown as A). Another way is to limit the jurisdiction’s requirements (shown as B) to better suit the capabilities of the tool. Usually, it takes a combination of both to arrive at a suitable outcome. If the gap cannot be closed (shown as C), then the project is likely doomed to disappointment or even failure.

    One thing we learned early on is that, when it comes to legislative documents, there really isn’t a lot of wiggle-room in the requirements. The form of the documents is often dictated by long established traditions and good luck trying to change that. This is one case where the expression “It will take an Act of Congress” can be quite literally true.

    This means that the gap will have to be closed through customization and the effort (and risk) to do so will be quite substantial. XMetaL, way back in 2002, provided an extensive set of programmatic APIs to work from, and that very nearly wasn’t enough. Unfortunately, the newer web-based editors haven’t, for many reasons, come close to matching XMetaL’s level of customisability.

Building our own authoring tool

Understanding the challenges of Akoma Ntoso, our customer’s demanding requirements, and the limitations of the state-of-the-art in web-based authoring tools, we embarked on a project several years ago to build our own XML authoring tool. The result is now used in a number of applications. It’s been quite a challenge – and that’s an understatement. Building a highly configurable web-based XML authoring tool that is truly a step ahead of the old desktop editors of twenty years ago has required us to truly harness every aspect of modern web technologies and methodologies.

The result is an XML authoring tool especially adapted to the needs of Akoma Ntoso. However, it’s not just an Akoma Ntoso editor. It’s an XML authoring tool, capable of adapting to any reasonable XML schema – for the legislative field, the regulatory field, or any similar field where the demands of structured documents require a sophisticated level of customisation.

If you want to see our tool in action in a bespoke implementation, here’s an early peek:

https://www.youtube.com/watch?v=CTAad2E-9Y4&feature=youtu.be

(This link shows a dated version at this point. It shows the editor as it was around December of 2016. We’ve advanced quite a bit since then – in both the intrinsic capabilities of the editor and in the capabilities built into the bespoke customisation.)

Standard
Process, Uncategorized

Becoming Agile

Lately we’ve become quite Agile. More and more, our government customers have started to impose Agile methodologies on us. While I’ve always thought of our existing methodologies as being quite nimble, adopting Agile and Scrum methodologies has required some adaptation on my part.

Early in the game, I started to find Agile to be more of a hindrance than a help. The drumbeat of each sprint was wearing me out – and I started to feel the inevitable effects of burnout creeping into my every thought.

But then a remarkable thing happened. I found myself not only defending Agile, but advocating it for our other projects. I was quite surprised to find myself having become such a big supporter. So what changed?

Early on, Agile was new for all of us. Our team was new, geographically distributed in three different parts of the world, all 8 hours apart. That team consisted of representatives from a set of customers and several partners all learning to work together to build a challenging solution. We adopted the Scrum methodology and planned out a long series of two week sprints. Each sprint had a set of stories assigned to it as we set off to build the most awesome bill drafting system of all time.

The problem was that the pace was too aggressive. In a software development project, you need to manage two different aspects – making forward progress by adding features while ensuring a sound implementation through refinement. Agile methodologies lean away from lots of up-front design. This makes it possible to show lots of forward momentum early, but the trade-off is that the design will need to be refactored often as new requirements are uncovered and added to the picture. We were too focused on the forward momentum and were leaving a trail of unfinished “programming debt” in our wake. This debt was causing me increasing anxiety as time marched on.

There is an important concept in Agile Scrum called the retrospective. It’s all about continuous improvement of the process. As we’ve grown as a team, we’ve become better at implementing retrospectives. These led to the most important change we’ve made – moving from a two-week to a three-week sprint. We didn’t just add time to our sprints, we fundamentally changed the structure of a sprint. We still schedule two weeks’ worth of tasks to each sprint, but rather than just assuming that everything will work out perfectly, we leave a week open for integration, testing, and development slack to be taken up by any refactoring that may have become necessary.

This third week, while arguably slowing us down, ends up helping by allowing us to emerge from each sprint in far better development shape to begin the next sprint. We just have to be disciplined enough to not try and squeeze regular development tasks into that third week. By working down programming debt continuously, subsequent sprints become more predictable. For various reasons, we temporarily returned to two week sprints and the problem of accumulating programming debt returned. The lesson learned is that you can’t build a complex system on top of a rickety foundation – you must continuously work to ensure a robust base upon which you are building. Without this balance, Agile just becomes a way to expedite a project at the expense of good development practices.

Another key change has been in how we use tools that help to do our work. As I mentioned earlier, our development teams are very distributed – around the world. It’s important that we be able to communicate very effectively despite the distance. Daily stand-ups with the entire team are not possible although we do ensure at least two meetings each sprint with the whole team. We use four primary tools – GitHub as our source code repository, AWS for our development and test servers, Slack for casual day-to-day conversation, and JIRA for managing the stories and tasks. It is the use of JIRA that has taken the most adaptation. Our original methodology was quite clumsy, but with each sprint we refine our usage to the point that it has become a very effective tool. Now, a dashboard presents me with a very clear picture of each sprint’s goals and everyone can monitor the progress towards those goals as the sprint progresses – there are no surprises.

Agile and Scrum are allowing a disparate group of customers and vendors to become a very highly performing software development team. We’re far from perfect, but with every sprint we learn more, make changes, and emerge as a better team than before.


Standard
Uncategorized

The (Supposed) Limitations of XML

It’s been a while since I updated my blog – a whole year in fact. The reason is that I’ve been hard at work finishing our web-based XML editor, LegisPro, supporting our projects with the U.S. House, while simultaneously developing an Akoma Ntoso-based implementation for the U.K. and Scottish Parliaments. The challenge has been all-consuming.

Next week I will be giving a couple of talks with Matt Lynch of the Scottish Parliament at the LEX Summer School 2016 in Ravenna, Italy and then, the following week, by myself at NALIT 2016 in Indianapolis. My company, Xcential, also intends to show glimpses at our booth at the Data Transparency Conference in Washington D.C. on the 28th of September. We’ve got a busy month ahead of us.

Recently the tired old question of whether a legislative drafting system is best built on a word processor or using true XML technology was raised yet again. (No, the Open Document Format (ODF) and Office Open XML don’t make word processors into XML editors.) To me, and everyone I interact with, the answer is quite clear and was settled a decade ago – XML is the way to go. The reason is simple. XML provides a long-lasting data format that can be used to build a comprehensive solution that enables all of the required automation features for legislative drafting. On the other hand, shoe-horning a legislative drafting application into a word processor, never designed for this type of application, results in too many compromises.

In reliving the debates from 10 years ago, I stumbled across a competitor’s white paper on the subject. While they still promote the white paper, its content is quite dated. It makes the case for the word processor approach rather than XML. I read, with some amusement (or was it irritation), all the perceived shortfalls of XML.

I thought it would be fun to take a look at each of these supposed problems with XML and provide a counterpoint to each of them. To be fair, this paper was written several years ago and technology doesn’t stand still. So, here goes:

Point 1: Legislative content and presentation cannot be separated

There is a thread of truth to this. Because so much of the amending process is based around the page and line number paradigm for referring to locations, it is essential that there be a robust and precise means for referring to any part of the document, right down to a specific word. However, that is the entire requirement – there is no further need for the presentation to be tied directly to the content. Legislatures, including California and the U.S. Congress, have used markup technologies for many decades now, long before the advent of XML (or even SGML). If the requirements mandated that content and presentation not be separated, none of these solutions would have been viable.

So let’s consider the specific issue – how to tie page and line numbers to the content. Superficially, a word processor does this with an intrinsic page and line numbering capability. However, you quickly discover a problem – legislation requires page and line numbers fixed to locations in the last official publication. The dynamic recalculating nature of the intrinsic page and line number capability of a word processor renders the capability useless. Instead, the classic way to address this requirement is to produce a separate rendition of the document using a hidden tabular format, one row per published line, with a column for line numbers and a column for content. However, this creates a huge problem – you now have two copies of the legislation, one organised by the document structure and one organised by physical layout. Getting between these two representations precisely becomes a troubling problem to deal with.

For XML, this was also a challenge until we came up with a very clean and workable solution almost a decade ago. Now, when we publish the document PDF, we back-annotate unobtrusive markers into the XML. These markers are used to arrange the editor presentation as well as to drive the amendment engine. This works out very nicely. We have implemented this technique several times now with great success.

Point 2: Temporal relationships must be preserved

This one made me laugh. For years, we’ve pointed to issues like this as reasons to go to XML rather than to avoid it. The argument made in the white paper is that XML provides no facilities to model the temporal relationships that are necessary when making citations or establishing other relationships that exist in legislation. While this is true, it’s also quite misleading. To expect XML to intrinsically provide this facility is to completely fail to understand the role of XML. In fact, word processors have no intrinsic capability to solve this problem either – it’s something that has to be built.

We’ve been addressing this problem since our beginning in 2001 using web-based references or URIs and a clever middle-tier technology we call a resolver that interprets the temporal or versioning aspects of the citation or reference. This problem was solved long ago.

Point 3: Permanence is required

The argument here is confusing. It is totally true that there is an unbendable requirement that legislation be preserved in a form known to last forever (or at least for many centuries). It’s also totally true that there is no digital technology with the proven permanence of paper or vellum. However, this is an argument about the medium used to preserve the content. It’s not an argument about the type of format that should be used to create the content – unless there is an argument that we should give up on digital technologies and return to paper, scissors, and glue. A physical document can be produced for archival purposes regardless of the technology used to draft it.

Personally, if you were to ask me, the logical document to archive would be a vellum printout of the XML. That way, you would have a much easier task of restoring the document at some future date should some catastrophe result in the loss of the digital record. However, I don’t think that’s a decision anyone will make anytime soon. John? 😉

Point 4: Work-in-progress is structurally broken by XML’s rules

This has long been the primary argument against XML editors. Their rigidity in enforcing the rules often gets in the way, especially in the early part of the drafting cycle when the ideas are still fluid. Most XML editors just aren’t designed for this type of work.

This limitation is one that I’ve focused considerable effort on overcoming for years. Our most recent efforts have made tremendous strides. We’ve tackled this problem in two ways:

First, using the DOM Range constructs that are inherent in modern browsers, we’ve been able to loosen up the selection model considerably so that it closely matches selection in a word processor. Using sophisticated programming of the DOM and this range mechanism, we are able to match much of the loose editing offered by a word processor.
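
A flavour of the DOM Range mechanism at work (a simplified sketch using standard browser APIs):

    // The user's selection may start and end in entirely different elements.
    const selection = window.getSelection();
    if (selection.rangeCount > 0) {
      const range = selection.getRangeAt(0);

      // The boundary points are free to ignore element boundaries…
      console.log(range.startContainer, range.startOffset);
      console.log(range.endContainer, range.endOffset);

      // …so a deletion can span partial elements, much as in a word processor.
      // (A real editor must then repair the structure against the model.)
      range.deleteContents();
    }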

Second, we go beyond the word processor by allowing the structure to be removed entirely and allowing the words to be rearranged entirely unencumbered by any structure at all (think text editor). Once the words are rearranged and the user is ready to move on, we automatically recreate the structure for the user. It’s a great way for drafters to get their ideas down without worrying about detail. Turns out, it’s also a great way to import foreign content – a nice bonus.

Point 5: XML document validation is insufficient

This is another one that made me laugh. As before, it’s totally true and entirely misleading. XML validation is not intended to be the be-all and end-all of verifying a document. It would be quite remarkable if an XML schema could do that. Curiously, a word processor offers nothing whatsoever in this regard – it must all be custom created.

When it comes to the subject of verifying a document, we use two terms – validation and verification. Validation is the process of ensuring that the document’s XML content adheres to the content model prescribed by the schema. We call this the “outer envelope” of checks. The “inner envelope” is to verify that the document adheres to the jurisdiction’s internal business rules. While off-the-shelf technologies exist to perform the outer XML validation, this inner verification step requires custom software. We’ve built a configuration mechanism that allows us to configure a “model” that our existing software can use for verification rather than building this from scratch each time.
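
To make the distinction concrete, here is a sketch of one hypothetical “inner envelope” rule – sequential section numbering – the sort of check that typically lives outside the schema:

    // Outer envelope: the schema guarantees each <section> contains a <num>.
    // Inner envelope: a business rule requires section numbers to run 1., 2., 3., …
    function verifySectionNumbering (doc) {
      const errors = [];
      const sections = Array.from(doc.getElementsByTagNameNS('*', 'section'));
      sections.forEach((section, index) => {
        const num = section.getElementsByTagNameNS('*', 'num')[0];
        const expected = `${index + 1}.`;
        if (!num || num.textContent.trim() !== expected) {
          errors.push(`Section ${index + 1}: expected number "${expected}"`);
        }
      });
      return errors;  // an empty list means the rule is satisfied
    }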

Point 6: There is no common language around which to develop a standard

This one is perhaps the most annoying. To be fair, the white paper is not current and could do with an update – although doing that might undermine its use as a marketing tool. Again, as before, there is some truth to the assertion, but to argue that the differences between jurisdictions disqualify XML is to see the glass as half empty rather than half full. This November will mark 15 years that I’ve been in this field, and I have worked on legislative systems on four continents. What has surprised me is how similar they are rather than how different. Fundamentally, the process of making law is the same almost everywhere. It’s in the details that the differences lie. Yes, in California a resolution is a type of document while in the UK a resolution is a type of section in a specific type of statutory instrument, but that’s a detail that doesn’t get in the way at all.

There is one point in this area that the white paper makes that is particularly annoying. XML is characterized as a generic model for representing data. That’s only half the story. Everybody in this field knows that there are two very different models that XML serves well – representing data and representing documents. XML, as a derivative of SGML, has stronger origins in the representation of documents than of data. So why are XML’s strengths in representing a document so casually ignored? Seems a little self-serving to me. JSON, on the other hand, is an excellent generic model for representing moderate amounts of data, but a terrible model for representing documents.

The entire argument here can now be refuted, as we now have Akoma Ntoso. It’s an XML schema that was initially designed by the University of Bologna at the request of the United Nations. Today, it’s on the verge of becoming an OASIS standard. Akoma Ntoso understands that there is no one-size-fits-all solution to legislative XML. It addresses this by providing a basic set of constructs that are generally found everywhere, a mechanism to create custom constructs, and an overarching design for how to model the hierarchy of legislation.

The implementation I will be showing in Ravenna, Indianapolis, and Washington D.C. in the coming weeks will demonstrate just how a general-purpose XML model such as Akoma Ntoso for legislation can be applied to the specific needs of a pair of jurisdictions – and a pretty challenging pair of jurisdictions at that.

For me, XML is the easy winner. With XML you design the document to exactly fit the needs of a jurisdiction and then shape the tools to work with this model. With a word processor, you shoe-horn the needs of the jurisdiction into the limited flexibility offered by the word processor’s intrinsic model and then spend all your time trying to handle the mismatches between what the word processor was designed to do and what the customer wants. Either way it’s challenging, but with a word processor, much more so.

Standard
Uncategorized

LegisPro edit will soon be ready for beta!

Our new rulemaking editor, LegisPro edit, is coming along nicely. It’s a web-based XML drafting tool specifically designed for the rigors of rulemaking tasks such as legislative bill drafting. It supports both the Akoma Ntoso and USLM legislative models and can be customized to support any other model if necessary.

This past week I gave a demonstration of it at the LEX US Summer School at George Mason University in Washington D.C. With trepidation, I allowed everyone to have a hands-on experience with it as I provided guidance. This was the first time the editor had been used by anyone outside of Xcential and the first time we had stressed server performance. While certainly not glitch free, the editor exceeded my expectations for this point in the development process and all went well. It worked!

This week we are talking about the editor at NCSL by way of a screenshot demo that I am sharing here:

The next opportunities to try the editor hands-on will be at the LEX Summer School in Ravenna, Italy next month and we will also be showing it later in the month at NALIT in Sacramento, California.

The QuickStarter beta program is still in the process of being finalized. We are currently envisioning different levels of participation, from basic beta testing to a full-fledged evaluation program for anyone looking to use it or a part of it in an upcoming project.

More information can be found at http://xcential.com/legispro or you can contact us at info@xcential.com.

Standard
Akoma Ntoso, HTML5, LegisPro Web, Standards, Track Changes, Transparency, Uncategorized, W3C

Legal Citations and XML Editing for Legislation

It’s been quite some time since my last blog post – almost six months. The reason is that I’ve been very busy. We are doing a lot of exciting development within Xcential. We are developing a number of quite challenging projects around the globe.

If you’ve been following my blog, you may remember that I was working on an HTML5-based XML editor. That development was two years ago now. We’ve come a long way since then. The basic editor has been stripped down, componentized, and is being rebuilt to be a far more robust, scalable, and adaptable solution. There are more details below, which I will expand upon as the editor rolls out over the next year.

    Legal Citations

It has been almost a year since the last Legislative Data and Transparency Conference in Washington D.C. (The next one is coming up.) At that time, I spoke about the need for improved citation management in published XML documents. Well, we’ve come a long way since then. Earlier this year a Technical Committee was formed within OASIS to begin developing some standards. The Legal Citation Markup Technical Committee is now hard at work defining markup models for legal citations. I am a member of that TC.

The reference management part of our HTML5-based editor has been separated out as a separate project – as a citation interpreter and reference resolver. In our development tests, it’s integrated with eXist as a local repository. We also source documents from external sources such as LII.

We now have a few citation management projects underway, using our resolver technology. These are exciting projects which will be a huge step forward in improving how citations are managed. It’s premature to talk about this in any detail, so I’ll just leave this as a teaser of stuff to come.

XML Editing for Legislation

The OASIS Legal Document ML Technical Committee is getting ready to make a big announcement. While that work progresses, we at Xcential have been hard at work refining the state of the art in XML editing.

If you recall the HTML5-based editor for Akoma Ntoso from a couple of years back, you may remember that it was based around the new HTML5 technologies that had recently been incorporated into web browsers. We learned a lot from that effort – both good and bad. While we were able to build a reasonable tagging editor using facilities that made editing far easier, we still faced difficulties when it came to basic XML editing and scalability.

So, we’ve taken a more ambitious approach and produced a very generalized XML editing platform. Using what we learned as the basis, our new editor is far more capable. Rather than relying on a mapping of XML into an equivalent HTML5 structure, we now directly use the XML facilities that are built into the browser. This approach is both far more robust and far more scalable. But the most exciting aspect is change tracking. We’re building change tracking directly into the basic editing engine – from the outset. This means that we can track all changes – whether the changes are in the text or in the structure. With all browsers now correctly implementing the standardized DOM Range model, selections can freely cross element boundaries, so our change tracking model has to be very sophisticated to cope with them. While it’s hellishly complex, my experience in implementing change tracking technologies over many years is really coming in handy.
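
As a rough illustration of the technique (not our editor’s actual implementation), here is a minimal TypeScript sketch of recording a deletion from a DOM Range: the selected content, even when it spans element boundaries, is moved into a wrapper element rather than discarded, so the change can later be displayed, accepted, or rejected. The data-change marker attribute is an invention for the sketch.

```typescript
// Minimal sketch of DOM Range–based deletion tracking (illustrative only).
// extractContents() handles selections that span element boundaries by
// splitting the partially selected nodes at the range endpoints.

function trackDeletion(range: Range): HTMLElement {
  const fragment = range.extractContents();

  const wrapper = document.createElement("del");
  wrapper.setAttribute("data-change", "delete");             // hypothetical marker
  wrapper.setAttribute("data-when", new Date().toISOString());
  wrapper.appendChild(fragment);

  // The range is collapsed after extraction, so this re-inserts the
  // tracked content exactly where it was deleted from.
  range.insertNode(wrapper);
  return wrapper;
}

// Usage: track whatever the user currently has selected.
const selection = window.getSelection();
if (selection && selection.rangeCount > 0) {
  trackDeletion(selection.getRangeAt(0));
}
```

Note that splitting partially selected elements can leave structure that is invalid against the schema, which is exactly why the deletion process needs to be model-driven, as discussed below.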

If you’ve used change tracking in XMetaL, you know the limitations of their technology. XMetaL’s range selection constrains how you can select content, which limits the flexibility of deletion. This simplifies the problem for the XMetaL customizer, but at a serious usability price. It’s one of the biggest limiting factors of XMetaL. We’re dealing with this problem once and for all with our new approach – providing a great way to implement legislative redlining.

Take a look at the totally contrived redlining example on the left. It’s admittedly not a real example; it comes from my stress testing of the change tracking facilities. But look at what it does. The red text is a complex deletion that spans elements with little regard to the structure. In our editor, this was done with a single “delete” operation. Try to do this with XMetaL – it takes many operations and is a real struggle, even with change tracking turned off. In fact, even Microsoft Word’s handling of this is less than satisfactory, especially in more recent versions. Behind the scenes, the editor uses a model derived from the schema to control the deletion process and ensure that a valid document is the result.

If you’re particularly familiar with XMetaL, you will notice something else too: that deletion cuts through the structure of a table! XMetaL can only track changes within the text of table cells, not the structure. We’re making great strides towards proper legislative redlining technologies, and we are excited to work with our partners and clients to put them into practice.
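
As a hedged sketch of how that schema-driven control might work, the fragment below checks whether a structural deletion has left each affected parent element valid against a much-simplified content model. The model map and element names are invented for illustration; a real implementation would derive the model from the schema and repair or veto the edit rather than merely report it.

```typescript
// Illustrative only: a toy content model standing in for one derived
// from the schema, used to vet the result of a structural deletion.

type ContentModel = { required: string[] };

const contentModel: Record<string, ContentModel> = {
  section: { required: ["num", "heading", "content"] }, // hypothetical rules
  table:   { required: ["tr"] },
};

// After a deletion, confirm each touched parent still satisfies its model.
function isStillValid(parent: Element): boolean {
  const rule = contentModel[parent.localName];
  if (!rule) return true; // no known constraints for this element
  const childNames = Array.from(parent.children).map(c => c.localName);
  return rule.required.every(name => childNames.includes(name));
}
```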

Uncategorized

Is it time to rethink how we are governed?

We have seen the worst of our government in the past few weeks. Our politicians have seemingly forgotten that their mission is to solve problems. Instead, they’ve regressed to settling differences through tribal conflict. Isn’t that something we should have put behind us centuries ago?

Why is it that our politicians can never solve complex problems?

I have always been fascinated with complex problem solving. It’s why I found myself a job at the Boeing Company at the start of my career. My job was to find ways to use computer automation to help Boeing solve ever more complex problems. While at Boeing, I was introduced to the discipline of systems engineering.

In the 1940s, with the urgency of World War II as the impetus, large systems integrators like Boeing and AT&T had to find a way to eliminate the unpredictability of trial-and-error engineering. That way was systems engineering, which replaced the guessing game of early engineering efforts with a predictable discipline that allowed complex new systems to be brought online reliably and quickly.

The results speak for themselves. It’s that discipline in engineering that has given us the tremendous advances in aeronautics and electronics in the decades that have followed. Those supercomputers most people carry in their pockets would never have been possible were it not for the discipline of systems engineering.

Systems engineering imposes a rigorous problem-solving process: requirements are analyzed and quantified, alternatives are thoroughly studied, and the optimal solution is selected. Emotions are wrung out of the process as soon as possible. When a problem is too large or appears insurmountable, it is broken down into smaller problems that are solved individually. Each step along the way, and every decision, is exhaustively documented and reviewed by peers. It’s a scalable process that allows any problem, no matter how complex or difficult, to be tackled with a good probability of success.

Of course, it’s not a perfect process. There are plenty of strong opinions, politicking, and sometimes even special interests to deal with. However, engineers are able to handle this because they are trained to work through their differences to find the best answers. Engineers are taught to detect and avoid the pitfalls of relying on opinions and ideology; instead, they must relentlessly seek true and indisputable facts. Being able to do this effectively is a condition of employment. Engineers who can’t follow the process must be let go – businesses simply cannot afford to keep underperformers.

The problems that systems engineers must tackle are many times more complex than anything that our politicians will ever have to address. While the results are never perfect, and challenges abound, when a new plane makes its way out to the runway for that first flight, it’s a certainty that it will fly. The discipline of the process almost guarantees it.

Contrast this with the way our politicians solve problems. In the unlikely event that their metaphorical plane ever finds its way out to a runway, chances are it will come to an ugly stop at the end of the runway, crumpling into a pile of wishful thinking and intentional sabotage.

What’s the difference? Simply put, in systems engineering, opinions are suppressed and facts are emphasized, while politicians seem to practice the exact opposite.

Why is it that we intuitively understand that the world’s most complex problems cannot be solved by people who rely on opinions and ideology, and yet that is exactly how we try to solve the world’s most important problems?

I am often asked what my vision is for legal informatics – the form of computer automation that targets legislative work. I’ve been pondering that question a lot over the past few weeks. Modern computing has revolutionized our lives. In the past twenty years alone, the way we interact with others, buy and sell products, keep ourselves entertained, and manage our lives has changed many times over thanks to computers and the Internet. Too often, though, when I look at how we apply legal informatics, we’re simply computerizing outmoded nineteenth-century processes – which, as we have seen in recent events, don’t work anymore.

I think it’s time that we rethink how we are governed – using the tools and technologies that have improved so many other aspects of our lives. Maybe then, we can have leaders who are problem solvers.
