
Xcential in Litigation with Akin Gump Law Firm

As many of you know by now, Xcential and I have been placed in the unfortunate position of having to deal with litigation with a large Washington, D.C. lobbying firm, Akin Gump. We have been getting a lot of press, and I feel it is my duty to explain the situation as best I can.

Back in late 2018, Xcential was approached by an attorney at Akin Gump who was interested in applying our bill drafting and amending application, LegisPro, to improve the process of drafting and amending federal legislation. Initially, he was interested in using LegisPro to generate a bill amendment. This eventually evolved into an investigation into whether LegisPro could generate a federal amending bill from an in-context, marked-up copy of the law itself. The Akin Gump attorney expressed frustration at having to type, in narrative format, the proposed changes to a federal bill, and sought a simpler solution.

Amending law in context was a use case for LegisPro’s amendment-generating capabilities that we had long anticipated. I had even written a blog post exploring the idea of amendments-in-context in 2013, which got a lot of coverage on Chris Dorobek’s podcast and on GovLoop. I have long made a habit of reporting on developments in the legal informatics industry and the work I do at my Legix.info blog, including a post from April 2018 which describes LegisPro’s feature set at that time.

First, a little explanation of how LegisPro is configured to work. As there is no universal way to draft and amend legislation, we must build a custom document model that configures LegisPro for each jurisdiction we work with. Legislation, particularly at the federal level, is complex, so this is a substantial task. The customer usually pays for this effort. For the federal government, we did have document models for parts of LegisPro, but they were specific to different use cases and belonged to the federal government, not Xcential. We were not entitled to use these configurations, or any part of them, outside the federal government. This means that, out of the box, LegisPro was not tailored for federal legislation in a way that we could share with Akin Gump. We would have to build a new custom document model to configure LegisPro to draft federal legislation for them.

Once the attorney had had an opportunity to try out a trial version of LegisPro using an account we had provided, we had a meeting in May 2019. To bring clarity to the conversation, as our terminology and his were different, I introduced the terms amending-in-full, cut-and-bite amendments, and amendments-in-context to the Akin Gump attorney. This is the terminology we use in the legal informatics industry to describe these concepts. Seeing the attorney’s enthusiasm for addressing the problem, and being convinced this was a true sales opportunity, I said I would find the time to build a small proof of concept to show how LegisPro’s existing cut-and-bite amendment generator would be configured to generate federal-style amendments. I had previously arranged to have a partial conversion of a part of the U.S. Code done by a contractor to support Akin Gump’s trial usage of LegisPro. In the months that followed, and using this U.S. Code data set, I set about configuring LegisPro to the task, much of it on my own personal time. Akin Gump did not cover any of these costs, including my time or the contractor’s fees.

I flew to Washington D.C. for an August 29th, 2019 meeting in the attorney’s office to deliver the demonstration of a working application in person. He was duly impressed and kept exclaiming “Holy S###” over and over. We explained that this was just a proof of concept and that we could build out a complete system using a custom document model for federal legislation. He had already explained that the cost of a custom document model was probably out of reach for Akin Gump, so we should consider the implementation as an Xcential product rather than a custom application for Akin Gump. We considered this approach, and our offer to implement a solution was a very small percentage of what the real cost would be. We would need to find many more customers on K Street to cover the development cost of a non-government federal document model. He had explained this approach would earn Xcential a “K Street parade,” a term we used to describe the potential project of building a federal document model to be sold on K Street.

As it turns out, even our modest offer of about $1,000 to $2,000 per seat (depending on various choices) and a range of $50,000 to $175,000 for a custom document model and other services was too much for Akin Gump, and they walked away. Our 2019 pricing sheet clearly stated that the per-seat prices did not include the cost of document conversion or configuration/customization and that a custom document model might be required for an additional fee. Furthermore, we had discussed the need for this custom work on several occasions. Our offer was more than fair, as building these systems typically runs into the millions of dollars. Despite considerable costs to us in terms of my time, the use of a contractor, and travel to Washington D.C., we were never under any obligation to deliver anything to Akin Gump, and Akin Gump never paid us anything.

In the process of configuring LegisPro to generate federal amending bills, I came up with some implementation changes to the core product which we felt were novel. They built on a mechanism we had already built for a different project and for which we had separately applied for a patent a year earlier. We went ahead and filed for a patent for those changes, describing the overall processing model using a term I coined called “bill synthesis,” echoing my experience with logic synthesis earlier in my career from which I had drawn inspiration.

Two and a half years later, we learned that Akin Gump had filed a complaint to assume ownership of our patent application. This made no sense at all, as our patent application is very implementation-specific to LegisPro’s inner workings. The Akin Gump attorney involved had played no part in the design or coding of these details. What he had done was describe his frustrations with the federal bill drafting use case to us – something we had long been aware of. He was simply asking us for a solution to a problem that was widely known in the industry and previously known to us.

When I got to see the assertions that Akin Gump makes in its court filings, I was astonished by their breadth. Rather than technical details, the assertions are all high-level ideas, insisting that an idea for an innovation the attorney had conceived of in the summer of 2018 is the “proverbial ‘holy grail’” to the legislative drafting industry. However, this innovative idea, as described in the complaint, is an application that closely resembles LegisPro as he would have experienced it during his trial usage in early 2019, including descriptions of key services and user interface features that have long existed.

What does not get any mention in Akin Gump’s filings are the details of our implementation on which we have based our patent claims — around document assembly and change sets. This is not surprising. The attorney is not a software developer, and Akin Gump is not a software firm; it is a law firm. Neither he nor the firm has any qualifications in the realm of complex software development.

I can appreciate the attorney’s enthusiasm. When I came across the subject back in 2001, I was so enthusiastic that I started a company around it.

But for me, this is very hurtful. I worked hard, much of it on my own time over the summer, to build the proof of concept for Akin Gump. Yet I am portrayed in a most unflattering way. While I recall a cordial working arrangement throughout the effort, that is not how Akin Gump’s complaint reads. There is no appreciation for the complexity of writing, or even configuring, software to draft and amend legislation. The attorney neglects to mention that I succeeded in demonstrating a working application, in the form of a proof of concept, on August 29th in his office. There is no understanding of the deep expertise that I brought to the table. The value of the time I had already spent on this project was likely worth more than the amount we asked from Akin Gump to deliver a solution.

I wake up many nights angry at what this has come to. We work hard to make our customers happy and to do so at a very fair price. We are a small company, based in San Diego, and having to defend ourselves against the accusations of a wealthy law firm is a costly and frustrating undertaking that distracts from our mission.

What is ironic is that, by taking the litigation route to claim a patent, Akin Gump has all but closed off any likelihood of ever having the capability. There are few software firms in the world capable of creating such a system. In the U.S., I know of only a couple, none of which have the experience and products that Xcential can bring to bear. For this solution to ever see the light of day at the federal level, it is going to take a substantial effort, with many trusted parties working collaboratively.

I once had a boss who would often say, “when all you have is a hammer, everything looks like a nail.” Just because you have a tool in your toolbox does not mean it is the right one for the job. For a law firm, litigation is an easy tool to reach for. But is it the right tool?


Mapping Amending Language to Akoma Ntoso Modifications

In my last blog, I talked about Xcential’s long history working with change management as it applies to legislation, and my personal history working on the subject in other fields.

In this blog, I’m going to focus on change management as it is used in Akoma Ntoso. I’m going to use, as my example, a piece of legislation from the California Legislature. Having implemented the drafting system used in Sacramento (long before Akoma Ntoso existed), I have a somewhat unique ability to understand how change management is practiced there.

First of all, we need to introduce some Akoma Ntoso terminology. In Akoma Ntoso, a change is known as a modification. There are two primary types of modifications:

  1. Active modifications — modifications that one document makes to another document.
  2. Passive modifications — modifications being proposed within the same document.
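
In the document’s metadata, Akoma Ntoso records both kinds in an analysis block using textualMod elements. Here is a minimal sketch of what that might look like for a bill like the one discussed below; the eId and href values are hypothetical, and a real document would carry more detail:

    <analysis source="#drafter">
      <activeModifications>
        <!-- Section 1 of the bill substitutes new wording for a section of the Government Code -->
        <textualMod type="substitution" eId="amod_1">
          <source href="#sec_1"/>
          <destination href="/akn/us-ca/act/gov-code/!main#sec_1029"/>
        </textualMod>
      </activeModifications>
      <passiveModifications>
        <!-- this version of the bill inserts Section 1 into the bill itself -->
        <textualMod type="insertion" eId="pmod_1">
          <source href="#sec_1"/>
          <destination href="#body"/>
        </textualMod>
      </passiveModifications>
    </analysis>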

The snippet I am using as an example is a cropped section of AB17 from the current session:

In California, many changes are shown using what they call redlining — or what you may know as track changes. However, it would be a mistake to interpret them literally as you would in a word processor — which is part of the reason it’s difficult to apply a word processor to the task of managing legislative changes.

In the snippet above, there are a number of things going on. Obviously, Section 1 of AB17 is amending Section 1029 of the Government Code. Because California, like most U.S. states, only allows its codes and statutes to be amended-in-full, the entire section must be restated with the amended language in the text. This is a transparency measure to make it clearer exactly how the law is being changed. The U.S. Congress does not have this requirement, and federal laws may use the cut-and-bite approach, where changes can be hidden in simple word modifications.

Another thing I can tell right away is that this is an amended bill — it is not the bill as it was introduced. I will explain how I can tell this in a bit.

From a markup standpoint, there are three types of changes in this document. Only two of these three types are handled by Akoma Ntoso:

  1. As I already stated, this bill is amending the Government Code by replacing Section 1029 with new wording. This is an active change in Akoma Ntoso of type substitution.
  2. Less obvious, but Section 1 of AB17 is an addition to the bill as originally introduced. I can tell this because the first line of Section 1, known in California as the action line, is shown in italics (and in blue, which is a convention I introduced). The oddity here is that while the section number and the action line are shown as an insertion, the quoted structure (an Akoma Ntoso term) is not shown as inserted. The addition of this section to the original bill is a passive change of type insertion.
  3. Within the text of the new proposed wording for Section 1029, you can also see various insertions and deletions. Here, you have to be very careful in interpreting the changes being shown. Because this is the first appearance of this amending section in a version of AB17, the insertions and deletions shown reflect proposed changes to the current wording of Section 1029. In this case, these changes are informational and are neither an active nor a passive change. Had these changes been shown in a section of the bill that had already appeared in a previous version of AB17, they would be showing proposed changes to the wording in the bill (not necessarily to the law) and they would be considered passive changes.

The rules are even more complex. Had Section 1 been adding a section to the Government Code, then the quoted text being added would be shown as an insertion (but only in the first version of the bill that showed the addition). Even more complex, had Section 1 been repealing a section of the Government Code, then the quoted text being repealed would be shown as a deletion (and would be omitted from subsequent versions of the bill). This last case is particularly confusing to the uninitiated because the passive modification of type insertion is adding an active modification of type repeal. The redlining shows the insertion as an italic action line, while the repeal is shown as a stricken deletion of the quoted structure.
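
To make that last case concrete, here is a rough sketch of how such a provision might be marked up. The ins, del, and quotedStructure elements are real Akoma Ntoso constructs; the wording, structure, and eId values are hypothetical and simplified:

    <section eId="sec_1">
      <!-- the action line, shown in italics as a passive insertion -->
      <num><ins>SECTION 1.</ins></num>
      <content>
        <p><ins>Section 1029 of the Government Code is repealed.</ins></p>
        <p>
          <!-- the quoted structure, shown stricken because the active change is a repeal -->
          <del><quotedStructure><!-- full text of the repealed section --></quotedStructure></del>
        </p>
      </content>
    </section>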

The lesson here is that track changes in legislation aren’t as literal as the track changes we may have learned in a word processor. There is a lot of subtle meaning encoded into the representation of changes shown in the document. Being able to control track changes in very complex ways is one of the challenges of building a system for managing legislative changes.


Xcential is a Change Management Company

At Xcential, we typically describe ourselves as a legislative technology company. While that is correct, the true answer is more nuanced than that. We purposefully don’t solve problems that are mainstream and relatively easily solved by other off-the-shelf software. Instead, we say that we focus on drafting but, in saying that, we understate what we do. In practice, we focus on a very complex and high-value problem called change management — as it relates to legislation. Few people truly know how to solve this problem.

Twenty years ago, the founders of Xcential worked at an XML database company that was a subsidiary of Xerox. We started Xcential because we thought legislation was one of the best applications for XML we had ever come across. It was the change management aspects that fascinated me in particular. While my knowledge of legislation was based on high school civics class, I had a lot of experience in the field of change management.

At the start of my career, I was an electronics design engineer at the Boeing Company. While there, I worked on a very sophisticated form of change management — concurrent fault simulation of behavioral representations of electronic systems. Fault simulation is a deliciously complex differencing problem. In legislation, we think of changes as amendments to the text and we record them as insertions and deletions. In fault simulation, the changes aren’t textual, they are behavioral. We record those changes as observable differences from expected results in something called a fault dictionary. With this dictionary of simulated faults, you are able to backtrack to predict which likely faults are causing the problem.

While managing amendments and managing faults in an electronic system might seem a world apart, algorithmically they are surprisingly similar. In an amended bill, the objective is to efficiently record changes to a document as deltas (differences) recorded inline within the original text. When simulating an electronic system, the objective is to record thousands of potential failures as shadow circuits (differences) against a single good simulation executing concurrently. The shadow circuits, while a dynamic part of a simulation run, are very analogous to the changes recorded in a document. It’s a very clever technique for efficiently simulating the behavior of thousands of things that might go wrong without having to run thousands of individual simulations.

Getting my head around the complexities of concurrent fault simulation taught me how to think in a world of asynchronous recursion — electronic systems are inherently asynchronous. Complex recursion in legislative documents is something I must frequently wrestle with, from parsing and responding to complex requests for documents or parts of documents in the URL Resolver to managing the layers of sets of changes that exist in the U.S. Code as laws are amended.

Change management has a lot of applications — not just managing faults in an electronic circuit or amendments in legislation. Another project at Boeing, one I was not directly involved with, involved allowing every airliner coming off the assembly line to have its own unique document configuration that would evolve through the thirty or so years the aircraft was in service. So many possibilities…


Legislative Terminology — The Same but Different

In my last blog, I covered a lot of the variations I find around the world. I do a lot of document analysis, working to map various legislative traditions into Akoma Ntoso. Doing the job right sometimes means understanding nuances and resisting the temptation to apply rules learned elsewhere.

There are a number of terms that often require very careful consideration:

  • In legislation in the English-speaking world, the “middle” layer is usually the Section. Numbering is sequential, starting at the beginning of the document and continuing to the end regardless of the hierarchy above. In non-English-speaking countries, this level is the Article, and the Section is an upper level like a Part or Chapter.

    However, there are exceptions. In the US Constitution, this practice is not followed: sections are found in articles. This arrangement is the opposite of European legislation, where articles are found in sections. This doesn’t really make a lot of sense. In a newspaper, articles are found in sections of the paper, like the business or sports section. This same structure exists in HTML5. Perhaps the framers of the US Constitution were trying to add a bit of European flair to their work but got the order backwards. Many constitutions around the world are modelled on the US Constitution and adopt the same unusual Article/Section arrangement.

    One quirk I came across lately was most confusing and presented an interesting conundrum. While the prevailing practices in the jurisdiction were British in tradition, a few statutes adopted a more European style. The sections were numbered sequentially and always referred to as sections. However, the numbering never explicitly calls out the level type (e.g. the section number is “2.” rather than “Sec 2.”). Nonetheless, knowing that this level is a Section, we had modelled the sections as akn:section. However, we then discovered a small handful of statutes that had upper-level sections as found in European legislation (e.g. SECTION 3). So, in these documents, there were two completely different types of constructs, both called sections. While this was probably an error caused by drafting rules not being enforced properly, the result was enacted law containing this error. We ended up using an akn:hcontainer with @name="section" to create another distinct type of Section (see the sketch after this list).
  • One common area of confusion is the use of plurals. We see this all over the place. For example, in some jurisdictions, the Section-type construct is known as a Regulation and the document containing them is called the Regulations. Other jurisdictions refer to the sections as Sections, and the document itself is the Regulation.

    This same practice is found with rules, but in that case, the Section-type construct is called a Rule and the document is known as the Rules. In this case, this naming practice is nearly universal.

    We find this same inconsistency with bill amendments. In some jurisdictions, each individual change is referred to as an Amendment and the collective whole is Amendments or an Amendment List. In other jurisdictions, the individual changes are known as Instructions and the collective whole is the Amendment. This difference can be confusing when mapping to Akoma Ntoso, as the schema implies the former convention, which is more common in Europe, while the latter approach is more prevalent in the U.S.
  • Another area of confusion is the difference between an Annex and a Schedule. The European concept of an Annex is a separate document treated somewhat as an attachment to the base document. However, a Schedule is different — it is clearly part of the body of the document. While it is most often found at the end of the body, in some jurisdictions with complex hierarchical structures, schedules can also be found at the end of any upper hierarchical level. This construct is one that cannot currently be modelled in Akoma Ntoso without resorting to akn:hcontainer, although the proposed next version includes akn:schedule to rectify this.
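
As promised above, here is a minimal sketch of the akn:hcontainer workaround for the dual use of “section”; the numbering, text, and eId values are hypothetical:

    <!-- the European-style upper-level "SECTION", modelled as a generic hierarchical container -->
    <hcontainer name="section" eId="hcontainer_3">
      <num>SECTION 3.</num>
      <!-- the ordinary, sequentially numbered sections nest beneath it as usual -->
      <section eId="sec_12">
        <num>12.</num>
        <content><p><!-- section text --></p></content>
      </section>
    </hcontainer>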

Mapping a jurisdiction’s legislation into Akoma Ntoso can be tricky. The mapping isn’t always straightforward, and almost always an exhaustive analysis of the entire body of existing laws will reveal that there are no hard and fast rules. As existing law can’t just be “fixed” to be consistent, it is often necessary to come up with creative ways to handle the oddities that are found.


GitHub Copilot — Is it the future?

Several months ago, I got admitted to the GitHub Copilot preview. For those of you who don’t know what Copilot is, it’s an AI-based plugin to Visual Studio Code that helps you by suggesting code as you type. If you like the suggestion, you hit tab, and on you go.


It may sound like magic, and in some ways, it does seem like that. Apparently, it learns from the vast base of open-source code found in GitHub repositories. This, of course, has led to the inevitable charges that it violates fair use of that code, and even that it will ultimately replace developers’ jobs much as factory automation has replaced workers. From my experience, this is more sensationalism than anything real to worry about.

In my recent posts, I’ve covered the DIKW pyramid. It seems we’ve been stuck in the information layer for a long time, only barely touching the knowledge layer in very rudimentary ways. Yes, there are tools like Siri and Alexa which claim to be AI-based virtual assistants, but they just feel like a whole bunch of complicated programming to achieve something that is more annoying than helpful. There is Tesla’s Autopilot for self-driving cars, but that just seems scary to me. (Full disclosure: I don’t even trust cruise control.) To me, GitHub Copilot is the first piece of software that truly seems to drive deep into the knowledge layer and even reach the wisdom layer. It’s truly simulating some sort of real smartness.

While the sensationalists love to make it seem that Copilot is lifting code from other people’s work and offering it up as a suggestion, I’ve seen nothing whatsoever to suggest that that is what it is doing. Instead, it truly seems to understand what I am doing. It makes suggestions that could only come from my code. It uses my naming conventions, coding standards, and even my coding style. It seems to have analyzed enough of the code base in my application to understand what local functions and libraries it can draw upon. The code it synthesizes is obviously built on templates that it has derived by learning. But those templates aren’t just copies of other people’s work. This is how synthesis works in the CAD world I come from (actually, it’s a bit more sophisticated than the synthesis I knew in CAD many years ago), and this is a natural next step in coding technologies.

I’ve been experimenting with what Copilot can do — how far-reaching its learning seems to be. It’s able to help me write JavaScript, and what it is able to suggest is remarkable. However, coding assistance is not its only trick. It even helps with writing comments — sometimes with a bit of an attitude too. Last week I was adding a TODO: comment into the loader part of LegisPro to note that it needed to be modernized. Copilot’s unsolicited suggestion for my comment was “Replace the loader with a real loader”. Thanks, Copilot. As Han Solo once said, “I’m not really interested in your opinion, 3PO”.

Of course, this all leads to the inevitable question. Can it be trained to write legislation? Much to my surprise, it seemingly can. How and why it knows this is completely unknown to me. It’s able to suggest basic amending language and seems to know enough that it can use fragments of quotes from Thomas Jefferson and Benjamin Franklin. I find it incredible that it can even understand the context of legislation and that I did not have to tell it what that context was.

So am I sold on this new technology? Well, yes and no.

It’s not the scary source-code-stealing and eavesdropping application some would make it out to be. The biggest drawback to it is the same reason I don’t even trust cruise control in my car. It’s not that I don’t trust the computer. It’s that I don’t trust myself not to become lazy and complacent and come to believe the computer is right. I’ve already come across a number of situations where I accepted Copilot’s suggestion without too much thought, only to needlessly waste hours tracking down a problem that would never have existed had I actually taken the time to write the code myself.

It’s an interesting technology, and I believe it’s going to be an important part of how software development evolves in the coming years. But as with all new technologies, it must be adopted with caution.


Comparing DOCX to Akoma Ntoso for Legislation

After describing what makes for good legislative XML, I feel I should bring up a favorite topic of mine — why word processors don’t make for good legislative drafting tools.

Lately, we’ve been implementing round-tripping tools to allow Akoma Ntoso documents to be imported and exported from Microsoft Word. This is to facilitate migration from a largely office-productivity-oriented system to an XML-based one, and to allow the exchange of documents with external clients that don’t have access to the internal systems being used to draft and manage legislation. It’s been quite a difficult process. The round-tripping itself has been straightforward: exporting a document is relatively easy, and reimporting that exported document, unchanged, isn’t difficult. What is very problematic is trying to ingest documents drafted or extensively edited using a word processor. The DOCX markup quickly becomes a tangled mess. Even when a document looks fine visually, there can be a lot going wrong on the inside, revealing the drafter’s struggle with the word processor to get a document that at least looks right. To avoid the problematic mess, we tend to resort to interpreting the words and discarding the structure and internal metadata entirely. It’s not perfect, but it’s at least manageable.
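
To give a flavor of the tangle, here is a hand-written sketch (not taken from any real document) of how a single, visually seamless italic phrase often ends up fragmented across runs in WordprocessingML after a few editing passes:

    <w:p>
      <!-- one phrase, needlessly split into three runs with redundant properties -->
      <w:r><w:rPr><w:i/></w:rPr><w:t>Sec</w:t></w:r>
      <w:r><w:rPr><w:i/></w:rPr><w:t xml:space="preserve">tion 1029 is </w:t></w:r>
      <w:r><w:rPr><w:i/></w:rPr><w:t>amended</w:t></w:r>
    </w:p>

Nothing in the markup records why the runs were split, so an importer cannot distinguish meaningful structure from editing debris.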

I’m going to compare the prominent word processing format of today, DOCX (well, at least the WordprocessingML part of it), to Akoma Ntoso with respect to how they stack up on my list:

  • Is it semantic?
    DOCX: No, not at all. DOCX is a serialization of the inner workings of Microsoft Word. It makes no attempt to be anything else.
    Akoma Ntoso: Yes, this is the fundamental approach Akoma Ntoso takes.
  • Is the presentation separated from the semantics as much as possible?
    DOCX: No, the presentation is tied directly into the document itself, and what’s more, is very proprietary.
    Akoma Ntoso: Yes, although you can apply presentation directly inline in cases, such as tables, where necessary.
  • Is all the text (excluding any metadata section) in the natural reading order?
    DOCX: Yes, for the most part.
    Akoma Ntoso: Yes, for the most part.
  • Does it, to the fullest extent possible, avoid the use of generated text?
    DOCX: No, and this is one of the most frustrating and infuriating parts of working with DOCX.
    Akoma Ntoso: Mostly, and it doesn’t preclude adopting practices that ensure this rule is fully followed.
  • Is every provision that needs data associated with it permanently identifiable?
    DOCX: Mostly.
    Akoma Ntoso: Yes, via the @wId or the @GUID attributes.
  • Is every provision that is referred to easily locatable?
    DOCX: Not without extensive customization.
    Akoma Ntoso: Yes, via a standardized locator mechanism using the @eId/@wId attributes (see the sketch after this list).
  • If the XML schema is for general use, is there an extensible way to add missing constructs?
    DOCX: No, unless you regard styling as your constructs (a bad idea) or want a complex customization task.
    Akoma Ntoso: Yes, via the seven elements found in the generic model.
  • Is there an extensible metadata mechanism?
    DOCX: Yes, but it’s complicated.
    Akoma Ntoso: Yes, but it’s complicated.
  • Does it provide the facilities necessary to automate according to modern expectations?
    DOCX: No, the presentation-oriented structure of DOCX does little to enable downstream automation.
    Akoma Ntoso: Yes, Akoma Ntoso encourages a hierarchical content structure that is ideal for downstream automation.
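
To illustrate the identifier-related rows (a minimal sketch; the attribute values are hypothetical), an Akoma Ntoso provision can carry a locating identifier and permanent identifiers side by side:

    <!-- eId: the locating identifier used by references;
         wId: a permanent identifier that survives renumbering;
         GUID: a globally unique identifier for external databases -->
    <section eId="sec_1029" wId="sec_1029" GUID="d3b07384-d9a0-4c2f-9f41-6a1e1f3a2b4c">
      <num>1029.</num>
      <!-- heading and content follow -->
    </section>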

Of course, Akoma Ntoso looks a lot better for legislative documents than DOCX does. That should be no surprise — Akoma Ntoso is purpose-built for legislation while DOCX is a general-purpose document model intended for no single purpose. But it is also fundamentally very different. While Akoma Ntoso is designed to be a modern, standards-based document information model for legislation, DOCX is a serialization of the archaic data structures that exist within Microsoft Word. DOCX reflects the proprietary inner workings of Microsoft Word rather than the semantic meanings to be found within a document.

Akoma Ntoso has its drawbacks too. It’s complex, a bit academic, and has to span a very broad range of legal traditions, making it a good fit for most legislative traditions but a perfect fit for none.


What is Good Legislative XML?

I’m often asked what makes one XML model better than another when it comes to representing laws and regulations. Just because a document is modeled in XML does not mean that it is useful in that form — the design of the schema matters in terms of what it enables or facilitates.

We have a few rules of thumb that we apply when either designing or adopting an XML schema:

  • Is it semantic? (see the sketch after this list)
    Reason: In order to process the information in a document, you have to understand what it is and what it means.
  • Is the presentation separated from the semantics as much as possible?
    Reason: We have moved beyond paper and nowadays it’s important to present information in form factors that just don’t suit the legacy constraints imposed by printing paper.
  • Is all the text (excluding any metadata section) in the natural reading order?
    Reason: The simplest way to present and process the text in a document is in the reading order of the text. This is particularly important if the presentation is to be added to the XML using simple CSS styling (as opposed to an HTML transformation) and when the text is subject to complex amending instructions.
  • Does it, to the fullest extent possible, avoid the use of generated text?
    Reason: Similar to the last rule, it’s important for text that is to be displayed or amended to actually be present in the representation. Generating text opens up a can of worms and can require sophisticated additional processing. Also, for the historical record of the text, which is essential for enacted law, having part of the text be generated by an external algorithm requires that the algorithm itself become part of the permanent record.
  • Is every provision that needs data associated with it permanently identifiable?
    Reason: With modern automation comes the need to not only manage the text of a provision but also state information. For example, is the current status of the provision pending, effective, repealed, or spent? While some of the metadata might be stored with the XML representation of the provision itself, sometimes it is better to store that metadata in a separate part of the document or in an external database. In these cases, it’s important to be able to permanently associate this external metadata with the provision — and this usually requires an immutable (permanent) identifier.
  • Is every provision that is referred to easily locatable?
    Reason: Laws are full of references (or citations). These are to provisions within the same document or to other documents or provisions within those documents. There needs to be a way to accurately and efficiently traverse and process these references. This usually requires a locating identifier that can unambiguously identify the provision being referred to.
  • If the XML schema is for general use, is there an extensible way to add missing constructs?
    Reason: It is easy to claim to support all the legal traditions in the world, but extremely difficult to do so. While legal traditions are remarkably similar around the world, it’s impossible to predict every single construct that will arise — especially with documents dating back hundreds of years. There has to be a way to implement constructs that don’t intrinsically exist within the base XML schema.
  • Is there an extensible metadata mechanism?
    Reason: A primary objective for representing a legislative or regulatory document in XML is for the processability it enables. This invariably means a need to record extensive metadata about the provisions found within the document. As the automation possibilities are endless, there needs to be a way to model and record the metadata that is generated.
  • Does it provide the facilities necessary to automate according to modern expectations?
    Reason: Some structures facilitate automation while others do not. For instance, flat structures can simplify the drafting process but also make the automation process more difficult. It’s usually better to implement hierarchical structures and then hide the drafting complexity this creates with richer tools.
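
Here is the sketch promised above (hand-written, not drawn from any real system), comparing a purely presentational rendering of a section heading with a semantic one:

    <!-- presentational: the markup only says how the text looks -->
    <p class="centered-bold">SEC. 2. DEFINITIONS.</p>

    <!-- semantic: the markup says what the text is -->
    <section eId="sec_2">
      <num>SEC. 2.</num>
      <heading>Definitions.</heading>
      <!-- content follows -->
    </section>

The first form can only be printed; the second can be styled any number of ways and, more importantly, can be located, referenced, amended, and tracked by software.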


Twenty Years in Legal Informatics!

Today marks my twentieth year in the field of legal informatics. It was January 4th, 2002 that we officially started Xcential. The following week, Brad and I flew up to Sacramento to start our new project to replace California’s aging mainframe system with a modern XML-based drafting system. At the time, with a background in CAD automation, I was relying on what I remembered from high school civics class as my understanding of the field. We’ve come a long way in those twenty years.

When we arrived in Sacramento, our charter was to work closely with the Legislative Data Center to produce a legislative drafting, amending, and publishing solution. The accompanying workflow system would be developed in-house and the database-oriented history system was to be developed by another vendor. There were a few constraints — the system had to be XML-based, the middle tier had to be Enterprise JavaBeans (EJB) and use WebLogic, and the database had to be Oracle. This last constraint had been decided somewhat mysteriously by upper management in the wake of 9/11 and left us scrambling to figure out how XML and an SQL-based relational database would work together. Fortunately, we learned that Oracle was developing XDB and they were open to using us as a guinea pig, for better or worse.

At the time we didn’t realize it, but we were the replacement for an unsuccessful attempt to build a drafting system using Microsoft Word. Somewhat strangely, while that project was wrapping up the same month we were starting, we never got wind of its existence and, to this day, I’ve never heard anyone mention anything about that project in Sacramento. The only hint we got was that we were expressly forbidden from suggesting Microsoft Word as the drafting tool. It was only when, at a conference, we came across the owner of the company that had performed that project, and he bitterly suggested our project would meet the same fate as his, that we realized the project had existed at all. Thankfully, he was wrong, and we deployed our solution in late 2004 for the 2005-2006 session. It’s been in use ever since.

So what has changed in the twenty years I’ve been in this field? Well, a lot has changed — and a lot has not. In my last two blogs I’ve discussed the DIKW pyramid and written about how migration from one layer to the next can be expected to take between ten and twenty years.

When we started in 2002, the majority of jurisdictions were still mired in the tail end of the “data” era — using data entry to get documents into mainframe systems. Other than that, there was little automation. A number of jurisdictions were starting to move forward into the “information” era, and there were two distinctly different approaches being taken. Many jurisdictions, as California had done before us, were taking a half-step into the new era using office productivity tools. The reason I consider this a half-step is that, while clearly a more modern approach than data entry into a mainframe, the step did little to prepare for the steps to come — being able to add layers of automation to increase the speed, volume, and efficiency of processing legislation. This was the lesson California had learned with their earlier project, and others have learned since — that without a robust semantic information model, you just can’t build robust automation tools. Many jurisdictions did understand this and were working towards a full step using XML-based tools. Although XML tools at the time were decidedly first generation, the benefits that automation promised outweighed the risks of being an early adopter.

So where are we today? While twenty years ago most jurisdictions were at the end of the “data” era and the start of the “information” era, there has been considerable, if slow, progress. Today most jurisdictions are somewhere between the midpoint of the “information” era (mostly the office productivity approach) and the early stages of the “knowledge” era (with the XML approach). Many of the systems deployed in the mid-2000s are now starting to age out, and jurisdictions are looking to replace them with systems that can meet the modern demands of the 2020s.

As for Xcential, over the last few years we’ve been progressing from a consulting company to a product company — where we rely on third-party integrators to do implementations. This way we can leverage our 20 years of experience far more effectively. We still do our own implementations, when it makes sense, but we now offer LegisPro as a product that can be implemented by one of our partner companies, by a local integrator, or even by a jurisdiction’s own internal development team. Xcential today is very different from what it was 20 years ago, and our growth over the past year or so has been amazing — and for me quite exhausting.

It will be interesting to see where we are in another twenty years — although I may have retired by then. (most people roll their eyes at this point suggesting they think I’ll never want to retire)


The Knowledge Pyramid

At the very start of my career at the Boeing Company, my boss Jerry introduced the Knowledge Pyramid (the DIKW pyramid) to me one evening. I had an insatiable thirst for learning, and he would spend hours introducing me to ideas he thought I could benefit from. To me, this was a profound bit of learning that would somewhat shape my career.

At the time, I was working in CAD support, introducing automation technologies to the various engineering projects around the Boeing Aerospace division. The new CAD tools were running on expensive engineering workstations and were replacing largely homegrown minicomputer software from the 1970s.

Jerry explained to me that the legacy software, largely batch tools that crunched data manually input from drawings, represented the data layer. The CAD drawings our tools produced were a digital representation of the designs, with sufficient information for both detailed analysis and manufacturing. It would take a generation of new technologies to advance from one layer to the next in the DIKW pyramid — with each generation lasting from ten to twenty years. His interest was in accelerating that pace, and so we studied, as part of our R&D budget, artificial intelligence, expert systems, language-based design techniques, and design synthesis.

While data was all about crunching numbers, information was all about understanding the meaning of the data. Knowledge came from being able to use the information to synthesize (Jerry’s favorite topic) new information and to gain understanding. And finally, wisdom came from being able to work predictively based on that understanding.

When I was introduced to legal informatics in the year 2000, it was a bit of a time warp to me. While the CAD world had advanced considerably and even design synthesis was now the norm, legal informatics was stuck in neutral in the data processing world of the late 1970s and early 1980s. Mainframe tools, green-screen editors, and data entry were still the norm. It was seeing this that gave me the impetus to work to advance the legal field. The journey I had just taken in the CAD world over the prior 15 years was yet to be taken in the legal field. The transition into information processing was to start with the migration to XML — replacing the crude formatting-oriented markup used in the mainframe tools with modern semantic markup that provided for a much better understanding of the meaning of the text.

To say the migration to the future has gone slowly would be an understatement. There are many reasons why this has happened:

  • The legacy base of laws has to be carried along — unchanged in virtually every way. This would be like asking Boeing to advance their design tools while at the same time requiring that every other aircraft design ever produced by the company in the prior century also be supported. For law, it is a necessary constraint, but also a tremendous burden.
  • The processes of law are bound by hard-to-change traditions, sometimes enshrined in the constitution of that jurisdiction. This means the tools must adapt more to the existing process than the process can adapt to the tools. Not only does this constraint require incredibly adaptable tools, it is very costly and dampens the progress that can be made.
  • The legal profession, by and large, is not technology driven, and there is little vision into what can be. The pressure to keep things as they are is very strong. In the commercial world, companies simply have to advance or they won’t be competitive and will die. Jurisdictions aren’t in competition with one another, and so the need to change is somewhat absent.

For advancements to come there needs to be pressure to change. Some of this does come naturally — the hardware the old tools run on won’t last forever. New legislators entering their political careers will quickly be frustrated by the archaic, paper-inspired approach to automation they find. For instance, viewing a PDF on a smartphone is not the best user experience. It is that smartphone generation that will drive the need to change.

Over the next few blogs, I’m going to explore where legal informatics is on the DIKW pyramid and what advancements on the horizon will move us up to higher levels. I’ll also take a look at new software technologies that point the way to the future — for better or worse.


I’m Back!!!

After a long hiatus away from my blog, I decided to reinstate it and get back to regular blogging.

There are going to be a few changes. While the subject stays the same, I’m returning this blog to its original intent — a personal blog about the technologies, tools, and processes I encounter and the many events I participate in. It’s going to be less about Xcential and LegisPro and more about my experiences in the field of legislative technology.

It’s not that Xcential and LegisPro don’t remain an important part of my life — they remain the central focus. However, as my blog started to become more of a marketing tool and less of a personal blog, my interest started to wane.

Another change is that I’m going to focus on simpler, more frequent posts. They will cover a range of topics:

  • Observations and discoveries about legislative technologies
  • Experiences implementing Akoma Ntoso and other XML document models around the world
  • Modern technologies I learn and apply as part of my job
  • Software development processes and practices
  • Software tools and platforms
  • Events relating to legislative technology

If there is something you think I should cover, let me know in the comments.
