Akoma Ntoso, Standards, Transparency

Legislative Data: The Book

Last week, as I was boarding the train at Admiralty station in Hong Kong to head back to the office, I learned that I am writing a book. +Ari made the announcement on his blog. It seems that Ari has found the key to getting me to commit to something – put me in a situation where not doing it is no longer an option. Oh well…

Nonetheless, there are many good reasons why now is a good time to write a book. In the past year we have experienced a marked increase in interest in the subject of legislative data. I think that a number of factors are driving this. First, there is renewed interest in driving towards a worldwide standard – especially the work being done by the OASIS LegalDocumentML technical committee. Second, the push for greater transparency, especially in the USA, is driving governments to investigate opening up their databases to the outside world. Third, many first-generation XML systems are now coming due for replacement or modernization.

I find myself in the somewhat fortunate position of being able to view these developments from an excellent vantage point. From my base in San Diego, I get to work with and travel to legislatures around the world on a regular basis. This allows me to see the different ways people are solving the challenges of implementing modern legislative information management systems. What I also see is how many jurisdictions struggle to set aside obsolete paper-based models for how legislative data should be managed. In too many cases, the physical limitations of paper are used to define the criteria for how digital systems should work. Not only do these limitations hinder the implementation of modern designs, they also create barriers that will prevent fulfilling the expectations that arise as people adapt to receiving their information online rather than on paper.

The purpose of our book will be to propose a vision for the future of legislative data. We will share some of our experiences around the world – focusing on the successes some legislatures have had as they’ve broken with legacy models for how things must work. In some cases, the change is simply a better separation of the physical limitations of the published form from the underlying content and structure. In other cases, we’ll explain how different procedures and conventions can not only facilitate the legislative process, but also make it more open and transparent.

We hope that by producing a book on the subject, we can help clear the path for the development of a true industry to serve this somewhat laggard field. This will create the conditions that will allow a standard, such as Akoma Ntoso, to thrive, which, in turn, will allow interchangeable products to be built to serve legislatures around the world. Achieving this goal will reduce the costs and risks of implementing legislative information management systems and will allow the IT departments of legislatures to meet both the internal and external requirements being placed upon them.

Ari extended an open invitation to everyone to propose suggestions for topics for us to cover. We’ve already received a lot of good interest. Please keep your ideas coming.

Standard
Akoma Ntoso, HTML5, LegisPro Web, Standards, Transparency

2013 Legislative Data and Transparency Conference

Last week I participated in the 2013 Legislative Data and Transparency Conference put on by the U.S. House of Representatives in Washington, D.C.

It was a one-day event that featured numerous speakers from both within the U.S. government and the surrounding transparency community around D.C. My role, at the end of the day, was to speak as a panelist, along with Josh Tauberer of GovTrack.us and Anne Washington of The George Washington University, on Under-Digitized Legislative Data. It was a fun experience for me and allowed me to have a friendly debate with Josh on APIs versus bulk downloads of XML data. In the end, while we fundamentally agree, he favors bulk downloads while I favor APIs. It’s a simple matter of how we each use the data.

The morning sessions were all about the government reporting the progress they have made over the past year relating to their transparency initiatives. There has been substantial progress this year and this was evident in the various talks. Particularly exciting was the progress that the Library of Congress is making in developing the new congress.gov website. Eventually this website will expand to replace THOMAS entirely.

The afternoon sessions were kicked off by Gherardo Casini of the UN-DESA Global Centre for ICT in Parliament in Rome, Italy. He gave an overview of the progress, or lack thereof, of XML in various parliaments and legislatures around the world. He also gave a brief mention of the progress in the LegalDocumentML Technical Committee at OASIS which is working towards the standardization of Akoma Ntoso. I am a member of that technical committee.

The next panel was a good discussion on extending XML. The panelists included Eric Mill of the Sunlight Foundation, who, among other things, talked about the HTML transformation work he has been exploring in recent weeks. I mentioned his efforts in my blog last week. Following him was Jim Harper of the Cato Institute, who talked about the Cato Institute’s Deepbills project. Finally, Daniel Bennett gave a talk on HTML and microdata. His interest in this subject was also mentioned in my blog last week.

One particularly fun aspect of the conference was walking in and noticing the Cato Institute’s Deepbills editor running on a table at the entrance. The reason it was fun for me is that their editor is actually a customization of an early version of the HTML5-based LegisPro Web editor which I have spent much of the past year developing. We have developed this editor to be an open and customizable platform for legislative editing. The Cato project is one of four different implementations which now exist – two are Akoma Ntoso based and two are not. More news will come on this development in the not-too-distant future. I had not expected the Cato Institute to be demonstrating anything, and it was quite a nice surprise to see software I had written up on display.

If there was any recurring theme throughout the day, it was the call for better linked data. While there has been significant progress over the past year towards getting the data out there, now it is time to start linking it all together. Luckily for me, this was the topic I had chosen to focus on in my talk at the end of the day. It will be interesting to see the progress that is made towards this objective this time next year.

All in all, it was a very successful and productive day. I didn’t have a single moment to myself all day. There were so many interesting people to meet that I didn’t get a chance to chat with nearly as many as I would have liked to.

For an amusing yet still informative take on the conference, check out Ari Hershowitz’s Tabulaw blog. He reveals a little bit more about some of the many projects we have been up to over the past year.

https://cha.house.gov/2013-legislative-data-and-transparency-conference

Standard
Akoma Ntoso, HTML5, Standards, Transparency, W3C

XML, HTML, JSON – Choosing the Right Format for Legislative Text

I find I’m often talking about an information model and XML as if they’re the same thing. However, there is no reason to tie these two things together as one. Instead, we should look at the information model in terms of the information it represents and let the manner in which we express that information be a separate concern. In the last few weeks I have found myself discussing alternative forms of representing legislative information with three people – chatting with Eric Mill at the Sunlight Foundation about HTML microformats (look for a blog from him on this topic soon), Daniel Bennett regarding microdata, and Ari Hershowitz regarding JSON.

I thought I would try and open up a discussion on this topic by shedding some light on it. If we can strip away the discussion of the information model and instead focus on the representation, perhaps we can agree on which formats are better for which applications. Is a format a good storage format, a good transport format, a good analysis/programming format, or a good all-around format?

1) XML:

I’ll start with a simple example of a bill section using Akoma Ntoso:

<section xmlns="http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03"
         id="{GUID}" evolvingId="s1">
    <num>§1.</num>
    <heading>Commencement</heading>
    <content><p>This act will go into effect on
       <date name="effectiveDate" date="2013-01-01">January 1, 2013</date>.
    </p></content>
</section>

Of course, I am partial to XML. It’s a good all-around format. It’s clear, concise, and well supported. It works well as a storage format and a transport format, as well as being a good format for analysis and other uses. But it does bring with it a lot of complexity that is quite unnecessary for many uses.

2) HTML as Plain Text

For developers looking to parse out legislative text, plain text embedded in HTML using a <pre> element has long been the most useful format.

   <pre>
   §1. Commencement
   This act will go into effect on January 1, 2013.
   </pre>

It is a simple and flexible representation. Even when a more highly decorated HTML representation is provided, I have invariably stripped away the decorations to leave behind this format.

However, in recent years, as governments open up their internal XML formats as part of their transparency initiatives, it’s becoming less necessary to write your own parsers. Still, raw text is a very useful base format.
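
Just to illustrate the point, here is a minimal sketch (in Python, with a regular expression invented purely for this example) of the sort of ad hoc parsing developers end up writing against raw text like this:

import re

raw = """\
§1. Commencement
This act will go into effect on January 1, 2013.
"""

# A made-up pattern: a section number such as "§1." followed by a heading,
# with the body text on the lines that follow.
SECTION_RE = re.compile(r"^§(?P<num>\d+)\.\s+(?P<heading>.+)$", re.MULTILINE)

match = SECTION_RE.search(raw)
if match:
    section = {
        "num": match.group("num"),
        "heading": match.group("heading").strip(),
        "text": raw[match.end():].strip(),
    }
    print(section)
    # {'num': '1', 'heading': 'Commencement',
    #  'text': 'This act will go into effect on January 1, 2013.'}

Heuristics like this are brittle – a change in punctuation or numbering style breaks them – which is exactly why richer representations are worth having.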

3) HTML/HTML5 using microformats:

<div class="section" id="{GUID}" data-evolvingId="s1">
   <div>
      <span class="num">§1.</span>
      <span class="heading">Commencement</span>
   </div>
   <div class="content"><p>This act will go into effect on
   <time name="effectiveDate" datetime="2013-01-01">January 1, 2013</time>.
   </p></div>
</div>

As you can see, using HTML with microformats is a simple way of mapping XML into HTML. Currently, many legislative data sources that offer HTML content either provide bill text as plain text, as I showed in the previous example, or decorate it in a way that masks much of the semantic meaning. This is largely because web developers are building the output to an appearance specification rather than to an information specification. The result is class names that describe the appearance of the text better than they describe the underlying semantics. Using microformats preserves much of the semantic meaning through the use of the class attribute and other key attributes.

I personally think that using HTML with microformats is a good way to transport legislative data to consumers that don’t need the full capabilities of the XML representation and are more interested in presenting the data rather than analyzing or processing it. A simple transform could be used to take the stored XML and to then translate it into this form for delivery to a requestor seeking an easy-to-consume solution.
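
As a rough sketch of what such a transform might look like – written here in Python with the standard library’s ElementTree rather than as an XSLT stylesheet, and with class names that simply mirror the microformat example above – consider:

import xml.etree.ElementTree as ET

AKN_NS = "{http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03}"

def section_to_microformat(akn_section_xml: str) -> str:
    """Map an Akoma Ntoso <section> fragment onto a microformat-style HTML
    <div>, carrying the semantic names over as class attributes."""
    section = ET.fromstring(akn_section_xml)

    div = ET.Element("div", {"class": "section",
                             "id": section.get("id", ""),
                             "data-evolvingId": section.get("evolvingId", "")})
    head = ET.SubElement(div, "div")
    ET.SubElement(head, "span", {"class": "num"}).text = \
        section.findtext(AKN_NS + "num", default="")
    ET.SubElement(head, "span", {"class": "heading"}).text = \
        section.findtext(AKN_NS + "heading", default="")

    content = ET.SubElement(div, "div", {"class": "content"})
    for p in section.iter(AKN_NS + "p"):
        html_p = ET.SubElement(content, "p")
        html_p.text = p.text  # a fuller transform would also map inline
                              # children such as <date> onto <time> elements
    return ET.tostring(div, encoding="unicode")

In production, the same mapping would more likely live in an XSLT stylesheet applied by the data service at delivery time, but the idea is the same.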

[Note: HTML5 now offers a <section> element as well as an <article> element. However, they’re not a perfect match to the legislative semantics of a section and an article so I prefer not to use them.]

4) HTML5 Microdata:

<div itemscope
      itemtype="http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03#section"
      itemid="urn:xcential:guid:{GUID}">
   <data itemprop="evolvingId" value="s1"></data>
   <div>
      <span itemprop="num">§1.</span>
      <span itemprop="heading">Commencement</span>
   </div>
   <div itemprop="content"><p>This act will go into effect on
      <time itemprop="effectiveDate" datetime="2013-01-01">January 1, 2013</time>.
   </p></div>
</div>

Using microdata, we see more formalization of the annotation convention than microformats offer – which brings along additional complexity and requires some sort of naming authority that I can’t say I really understand or see how it will come about. But it is a more formalized approach and is part of the HTML5 umbrella. I doubt that microdata is a good way to represent a full document. Rather, I see microdata better fitting the role of annotating specific parts of a document with metadata. Much like microformats, microdata is a good solution as a transport format for a consumer not interested in dealing with the full XML representation. The result is a format that is rich in semantic information and is also easily rendered to the user. However, it strikes me that the effort to handle naming more robustly only reinvents one of XML’s more confusing aspects – namespaces – in a different way.

5) JSON

{
   "type": "http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03#section",
   "id": "{GUID}",
   "evolvingId": "s1",
   "num": {
      "type": "http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03#num",
      "text": "§1."
   },
   "heading": {
      "type": "http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03#heading",
      "text": "Commencement"
   },
   "content": {
      "type": "http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03#content",
      "text1": "This act will go into effect on ",
      "date": {
         "type": "http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03#date",
         "date": "2013-01-01",
         "text": "January 1, 2013"
      },
      "text2": "."
   }
}

Quite obviously, JSON is great if you’re looking to load the information easily into your programmatic data structures and aren’t looking to present the information as-is to the user. It is primarily a programmatic format. Representing the full document in JSON might be overkill. Perhaps the role of JSON is to carry key pieces of extracted metadata rather than the full document.
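
To illustrate that role, here is a small sketch (Python again; the property names are ones I have chosen for this example, not part of any standard) that extracts a few key facts from the Akoma Ntoso section and serializes them as JSON:

import json
import xml.etree.ElementTree as ET

AKN_NS = "{http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03}"

def extract_section_metadata(akn_section_xml: str) -> str:
    """Pull key facts out of an AKN <section> and emit them as a small
    JSON record suitable for programmatic consumption."""
    section = ET.fromstring(akn_section_xml)
    date = section.find(".//" + AKN_NS + "date")
    record = {
        "id": section.get("id"),
        "evolvingId": section.get("evolvingId"),
        "num": (section.findtext(AKN_NS + "num") or "").strip(),
        "heading": (section.findtext(AKN_NS + "heading") or "").strip(),
        "effectiveDate": date.get("date") if date is not None else None,
    }
    return json.dumps(record, ensure_ascii=False, indent=2)

The consumer gets just the facts it needs, without ever having to touch the XML.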

There are still other formats I could have brought up, like RDFa, but I think my point has been made. There are many different ways of representing the same legislative model – each with its own strengths and weaknesses. Different consumers have different needs. While XML is a good all-around format, it also brings with it a degree of sophistication and complexity that many information consumers simply don’t need to tackle. It should be possible for a consumer to specify the form of the information that most closely fits their need and have the legislative data source deliver it in that format.

[Note: In Akoma Ntoso, the format is called the “manifestation” and is specified as part of the referencing specification.]

What do you think?

Standard
Akoma Ntoso, Standards, Transparency

Legal Reference Resolvers

After my last blog post I received a lot of feedback. Thanks to everyone who contacted me with questions and comments. Given all the interest, I think I will devote a few more blog posts to the subject of legal references. It is quite possibly the most important subject that needs to be tackled anyway. (And yes, Harlan, I will try and blog more often.)

Many of the questions I received asked how I envision the resolver working. I thought I would dive into this aspect some more by defining the role of the resolver:

The role of a reference resolver is to receive a reference to a document or a fragment thereof and to do whatever it takes to resolve it, returning the requested data to the requestor.

That definition defines the role of a resolver in pretty broad terms. Let’s break the role down into some discrete functions:

  1. Simple Redirection – Perhaps the most basic service to provide will be that of a reference redirector. This service will convert a standardized virtual reference into a non-standard URL, understood by a proprietary repository available elsewhere on the web, that can supply the data for the request. The redirection service allows a legacy repository to provide access to documents following its own proprietary referencing mechanism without having to adopt the standard referencing nomenclature. In this case, the reference redirector will serve as a front to the legacy repository, mapping the standard references into non-standard ones (see the sketch following this list).

  2. Reference Canonicalization – There are often a number of different ways in which a reference to a legal document can be composed. This is partly because the manner in which legal documents are typically structured sometimes encourages both a flat and a hierarchical view of the same data. For instance, one tends to think of sections in a flat model because sections are usually sequentially numbered. Often, however, those sections are arranged in a hierarchical structure which allows an alternate hierarchical model to also be valid. Another reason for alternate references is the simple fact that there are all sorts of different ways of abbreviating the same thing – and it is impossible to get everyone around the world to standardize on abbreviations. So "section1", "sec1", "s1", and the even more exotic "§1" need to be treated synonymously (the sketch following this list includes a minimal illustration). Also, let’s not forget about time. The requestor might be interested in the law as it existed on a particular date. The resulting reference will be formulated in a manner that makes it more of a document query than a document identifier. For instance, imagine a version of a section that became operational January 1, 2013. A request for the section that was in operation on February 1, 2013 will return that January 1 version if that version was still in operation on February 1, even though the operational date of the version is not February 1. (Akoma Ntoso calls the query case a virtual expression and differentiates it from the case where the date is part of the identifier.)

    The canonicalization service will take any reference, perhaps vague or malformed, and will return one or more standardized references that precisely represent the documents that could be identified by the original reference – possibly along with a measure of confidence. I would imagine that official data services, providing authoritative legal documents, will most likely provide the canonicalization service.

  3. Repository Service – A legal library might provide both access to a document repository and an accompanying resolution service through which to access the repository. When this is the case, the resolver acts as an HTTP interface to the library, converting a virtual URL to an address of sorts in the document repository. This could simply involve converting the URL to a file path or it could involve something more exotic, requiring document extraction from a database or something similar.

    There are two separate use cases I can think of for the repository. The basic case is the repository as a read-only library. In this case, references are simply resolved, returning documents or fragments as requested. The second case is somewhat more complex and will exist within organizations tasked with developing legal resources – such as the organizations that draft legislation within the government. In this case, a more sophisticated read/write mechanism will require the resolver to work with technologies such as WebDAV which front for the database. This is a more advanced version of the solution we developed for use internally by the State of California.

  4. Resolver Routing – The most complex aspect, and perhaps the most difficult to achieve, will be resolver routing. There is never going to be a single resolver that can resolve every legal reference in the world. There are simply too many jurisdictions to cover – in every country, state/province, county/parish, city/town, and every other body that produces legal documents. What if, instead, there were a way for resolvers to work together to return the document requested? While a resolver might handle some subset of all the references it receives on its own, for the cases it doesn’t know about, it might have some means to negotiate or pass on the request to other resolvers it knows about in order to return the requested data.
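
Here is a minimal sketch of the first two functions – a redirector that does a small amount of canonicalization before mapping the reference. It is Python using only the standard library, and both the reference pattern and the legacy URL layout are entirely invented for illustration:

import re
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical URL layout of a legacy repository that the resolver fronts for.
LEGACY_BASE = "https://legacy.example.gov/lawdb?doc="

# Treat "section1", "sec1", "s1", and "§1" as synonyms (function 2, in miniature).
SECTION_SYNONYMS = re.compile(r"(?:section|sec\.?|s|§)\s*(\d+)", re.IGNORECASE)

def canonicalize(ref_path: str) -> str:
    """Rewrite the section designation into a single canonical spelling."""
    return SECTION_SYNONYMS.sub(lambda m: "sec_" + m.group(1), ref_path)

class RedirectingResolver(BaseHTTPRequestHandler):
    """Function 1: answer a standardized reference with a redirect into the
    legacy repository's own, non-standard addressing scheme."""
    def do_GET(self):
        canonical = canonicalize(self.path.lstrip("/"))
        self.send_response(302)
        self.send_header("Location", LEGACY_BASE + canonical)
        self.end_headers()

if __name__ == "__main__":
    # e.g. GET /us/ca/gov-code/§1 redirects to
    #      https://legacy.example.gov/lawdb?doc=us/ca/gov-code/sec_1
    HTTPServer(("localhost", 8080), RedirectingResolver).serve_forever()

A real resolver would, of course, understand the full reference grammar, consult the repository it fronts for, and fall back to routing (function 4) when the reference lies outside its jurisdiction.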

Not all resolvers will necessarily provide all the functions listed. How resolvers are discovered, how they reveal the functions they support, and how resolvers are tied together are all topics which will take efforts far larger than my simple blog to work out. But just imagine how many problems could be resolved if we could implement a resolving protocol that would allow legal references around the world to be resolved in a uniform way.

In my next blog, I’m going to return to the reference itself and take a look at the various different referencing mechanisms and services I have discovered in recent weeks. Some of the services implement some of the functions I have described above. I also want to discuss the difference between an absolute reference (including the domain name) and a relative reference (omitting the domain name) and why it is important that references stored in the document be relative.

Standard
Akoma Ntoso, HTML5, Standards, Transparency

XBRL in Legislation

Over the past few weeks, my posts about the HTML5 editor I have been working on have received a lot of attention. One aspect that I have noticed throughout has been people equating legislative markup to XBRL. In fact, I have taken to explaining Akoma Ntoso as being like XBRL for legislation. That helps people better understand what we are up to.

Or does it? It’s a little misleading to describe a legislative information model as being like XBRL for legislation. The reason is that many of the people interested in the transparency aspects of legislative XML are interested in it precisely because they’re interested in tracking budget appropriations. The assumption being made is that the XML will somehow make it possible to track the money that flows through legislation.

In some respects XML does help track money. Certainly, reporting budgets as XML is a whole lot better than the other approach you often see – tables reported as images. Images are a very effective way to hide appropriations from machine processing. However, that’s less and less of a problem. Nowadays, if you take a look at any budget appropriation embedded within legislation, you’re likely to find the numbers reported in a table – most likely an HTML table at that. How you interpret that table is up to you. Perhaps the CSS class names for each cell will provide some guidance as to each cell’s content, but the information is being reported in a manner intended for human consumption rather than machine processing. In short, the manner in which financial information is reported in legislation is not designed to improve the transparency of the data.

In California, when we designed the budget amending facilities within the bill drafting system, our objective was to find a way to draft a budget amendment with the limited tabular capabilities of XMetaL. Whether the result was transparent or not was not a consideration as it was not a requirement six years ago. Victory for us was finding a way to get the immediate job done. Elsewhere, I have yet to see any legislative information model attempt to address the issue of making budget appropriations more transparent. Rather, the focus is instead on things like the temporal aspects of amending the law, the issues that arise in a multilingual society, or ensuring the accuracy and authenticity of the documents.

So what is the solution? I must profess to know very little about XBRL. What I do know tells me that it is not a suitable replacement for all those HTML tables that we tend to use. XBRL is a complex data format normalized for database consumption. The information is not stored in a manner that would allow for easy authoring by a human author or easy reading by a human reader. I did find one article from three years back that begins to address the issue. Certainly we’ve come far on the subject of legislative XML and it’s time to reconsider this subject.

The good news is that we do have a solution for integrating XBRL with legislative XML. Within an Akoma Ntoso document is a proprietary section found inside the metadata block. This section is set aside specifically to allow foreign information models to be embedded within the legislative document. So, much as the metadata already contains analysis sections for recording the temporal aspects of the legislation, it is quite easy to add an XBRL section to record the financial aspects of the legislation.

So the next question is whether or not XBRL is designed to be embedded within another document. It would seem that the answer is yes – and it is called inline XBRL. While the specification addresses fragments of XBRL within HTML documents, I don’t see why this cannot be extended to a legislative information model.  Simply put, a fragment of inline XBRL data would be embedded within the metadata block of the legislative document recorded in XML. This data would be available for any XBRL processor to discover (how is another question) and consume. The inline XBRL would be produced prior to publication by analyzing the legislative XML’s tables used to draft the document.
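
As a rough sketch of the embedding itself – assuming Akoma Ntoso’s <proprietary> metadata element and an inline-XBRL-style nonFraction fact, with the concept name, context, and unit identifiers below invented purely for illustration – the idea is simply to attach the fact to the bill’s metadata block:

import xml.etree.ElementTree as ET

AKN_NS = "{http://docs.oasis-open.org/legaldocml/ns/akn/3.0/CSD03}"
IX_NS = "{http://www.xbrl.org/2013/inlineXBRL}"  # check against the current iXBRL spec

def embed_appropriation(bill: ET.Element, concept: str, amount: str,
                        context_ref: str, unit_ref: str) -> None:
    """Attach one inline-XBRL-style fact to the <proprietary> section of the
    bill's <meta> block. 'bill' is the AKN document element (e.g. <bill>)."""
    meta = bill.find(AKN_NS + "meta")
    proprietary = meta.find(AKN_NS + "proprietary")
    if proprietary is None:
        # The source attribute here is a placeholder, not a real authority.
        proprietary = ET.SubElement(meta, AKN_NS + "proprietary",
                                    {"source": "#xbrlGenerator"})
    fact = ET.SubElement(proprietary, IX_NS + "nonFraction",
                         {"name": concept,        # e.g. "approp:HighwayFund"
                          "contextRef": context_ref,
                          "unitRef": unit_ref,
                          "decimals": "0"})
    fact.text = amount

The facts would be generated just before publication by walking the appropriation tables in the bill body, and any XBRL processor that knows to look in the metadata block could then consume them.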

Ideally, the XBRL data would be hooked directly to the legislative XML data, much like spreadsheet formulas can be attached to data, but maybe I’m getting ahead of myself. Providing budget appropriation information as inline XBRL embedded within the legislative XML would be a great step forward – and it would achieve the objectives that people interested in the transparency aspects of legislative XML actually have.

I’m certainly no expert in XBRL, so I’m interested in hearing what people in the know have to say about this. Let me know. If you know of an appropriations taxonomy for XBRL, let me know. And if you’re interested in following how the DATA Act might influence this subject, check out the Data Transparency Coalition.

Standard
Akoma Ntoso, Hackathon, Transparency

Unhackathon Wrapup

Well, we had our “unhackathon” and it was, overall, a great success. We learned a lot, introduced a lot of people to the notion of XML markup and Akoma Ntoso, and made a number of important contacts all around. I hope that all the participants got something out of the experience. In San Francisco we were competing with a lovely Saturday afternoon and a street fair outside – which people chose to give up in order to attend our event.

At UC Hastings we had a special visit from State Senator Leland Yee of the 8th District which was most gratifying. He has been a strong proponent for government transparency and his surprise visit was awesome.

This was the first outing of the new AKN/Editor, so I was quite nervous going in. Deciding to write an advanced editor, using brand new technologies, on all four browsers, and then planning an unmovable date just 10 weeks into the project is a little crazy now that I think about it. But the editor held up well for the most part. There were a number of glitches, but we were able to work around or fix most of them. The Akoma Ntoso model seemed to work overall, although there was a lot of the normal confusion over the hierarchical containment structure of XML. I did wish we could have made more progress with more jurisdictions and had been able to explore more of the redlining issues. But that was perhaps a bit too ambitious. I still want to find a venue to completely vet redlining, as I believe this is going to be the real challenge for Akoma Ntoso and I want to resolve those issues sooner rather than later.

On the entrepreneurial front, we did discover a potential market serving lonely males in the Middle East on Google Hangouts. We’ll leave that opportunity for someone else to exploit.

For me, trying to manage a room full of people, take care of editor issues, and keep in contact with the remote sites and participants around the world was very overwhelming. If I missed your call or your question, please accept my apologies. My brain was simply overloading.

Going forward we are now starting to make plans for where to go from here. The LegalHacks.org website will remain intact and will even continue to develop. I’m going to refine the editor based on feedback and continue further development in the weeks and months to come. We hope that the site will continue to develop as a venue for legal informatics hacking. Also, preliminary work is now underway for a follow-on unhackathon in another part of the world. Look for an announcement soon!

Thank you to all the organizers – Charles Belle at UC Hastings, Pieter Gunst at Stanford Codex, Karen Sahuka in Denver (BillTrack 50). Thank you to Ari Hershowitz at Tabulaw for pulling it all together, Jim Harper from the Cato Institute for his opening words, Brad Chang of Xcential for being me at Stanford, Robert Richards for being our tireless word spreader, and a special thank you to Monica Palmirani and Fabio Vitali at the University of Bologna for participating from afar and for providing Monica’s videos.

Standard
Hackathon, Transparency

An Interesting Idea for how to Rethink How we are Governed

This past weekend I came across an interesting blog post by Abe Voelker: GitLaw: GitHub for Laws and Legal Documents – a Tourniquet for American Liberty. The reaction he got was amazing and totally overwhelmed him.

It’s an interesting read – written by someone naive enough to actually have hope that we could fix our government by applying modern technology. Of course, those of us in the know realize how totally absurd his ideas are. The legislative process is meant to be out of reach of the public citizen. It’s meant to keep the common people from meddling in how laws are made. Those people would use normal everyday language that everybody could understand. Too many eyes might spot all the wasteful earmarks tucked in the back corners of the legislation. How could the public be trusted to do a good job legislating the laws of the land? Isn’t that why we have Congress, after all? As Abe points out, Congress has a 14% approval rating. Could the people actually compete with that? Could the people do a better job than what 14% of the population currently thinks is a good job?

Our legislative processes are arcane and convoluted, making it far too difficult to participate for anyone without substantial resources. These processes were designed at a time when communication with the people was achieved by sending messengers out on horseback. Today, within an instant of my posting this blog, it can be read by almost anyone anywhere on the planet. So why do we continue to govern ourselves by creating such a deliberate distance between the people and their government? Do we really still need to divide opinions into two failed ideologies? Can’t modern technologies allow us to manage more than two diametrically opposing positions on every issue? With our governments on the verge of bankruptcy and with their approval ratings well below a flunking grade, why is there not more urgency in reinventing how we are governed?

The technologies that have emerged in the past few years will be disruptive to how we are governed. While Abe’s ideas are a bit simplistic, they must make us take a step back and ask if there isn’t a better way to govern ourselves. The real question is whether our leaders will lead us into this new era or will they be the fly in the ointment. With a 14% approval rating one maybe shouldn’t expect too much.

If you want to make a difference and learn more about how laws could be made more open and transparent, consider attending our upcoming “unhackathon” events this coming weekend. Go to http://legalhacks.org for more information.

Standard
Akoma Ntoso, Standards, Transparency

Imagine All 50 States Legislation in a Common Format

Last week I expressed disappointment over NCSL’s opposition to the DATA Act (H.R. 2146). Their reasoning is that the burden this might create on the states’ systems will not be affordable. Contrast this with the topic of the international workshop held in Brussels last week – “Identifying benefits deriving from the adoption of XML-based chains for drafting legislation”. The push toward more transparent government need not be unaffordable.

With that in mind, stop for a while and imagine having the text of all 50 states’ legislation published in a common XML format. Seems like an impossibly difficult and expensive undertaking, doesn’t it? With all the requirements gathering, getting systems to cooperate, and getting buy-in throughout the country, this could be another super-expensive project that in the end would fail. What would such a project cost? Millions and millions?

Well, remember again Henry Ford’s quote: “If you think you can do a thing or think you can’t do a thing, you’re right.” Would you believe that a system to gather and publish legislation from all 50 states has recently been developed, in months rather than years, and on a shoestring budget? That system is BillTrack50.com. It’s a 50-state bill tracking service. Check it out! We, at Xcential, helped them do this herculean task by providing a simple and neutral XML format and the software to do much of the processing. The press release is here. The format is SLIM, the same format that underlies my legix.info prototype. It’s a simple, easy-to-adopt XML format built on our past decade’s experience in legislative systems. Karen Sahuka at BillTrack50 recently gave a presentation on her product at the Non-profit Technology Conference in San Francisco.

SLIM is not as ambitious as Akoma Ntoso. If you take a gander at my legix.info site, you will see that it’s very easy to go from SLIM to Akoma Ntoso. In fact, going between any two formats is not all that difficult with modern transformation technology. It’s how we built the publishing system for the State of California as well. My point is that with the right attitude, a little innovation, and the right tools, achieving the modern requirements for accountability and transparency need not be out of reach.

Standard
Standards, Transparency

The State of the Art in Legislative Editors and the DATA Act

(My plan was for my next blog post to contain a mini-tutorial on my editor; that is still coming this weekend.)

A report on legislative editors has just been released in Europe. You can find the report at https://joinup.ec.europa.eu/elibrary/document/isa-leos-final-results. It’s a very interesting read. It’s focused on Europe but is something we should look at seriously in the US.

After almost a decade in this business, I discovered my European counterparts a couple of years ago when I attended the LEX Summer School in Ravenna, Italy. (Info on this year’s class can be found here.) What struck me was how much innovative work was occurring in Europe compared to the USA. Sure, we have plenty of XML initiatives in the USA and there are many examples of modern, up-to-date systems we can point to, but there is a lot of fragmentation and duplication of effort and learning. All in all, it is my feeling that we’re falling far behind in this field. And yet the Europeans expect and want leadership from the USA; we’re the ones with a society more conducive to innovation and entrepreneurialism.

So how are we doing in the USA? This week the DATA Act (H.R. 2146) passed the U.S. House. It requires accountability and transparency in federal spending. Sounds like a good thing, doesn’t it? One does expect that the government we elect ultimately be accountable to we the people.

The DATA Act, while addressing federal spending, could be the impetus that drives state governments in America to update their systems to publish in open and transparent formats. Viewed as an opportunity, this act could ultimately drive better cooperation amongst the various state legislatures. This cooperation would improve innovation and progress in US legislative systems by focusing on common approaches and open standards. This less insular viewpoint would, as a result, improve efficiency and lower costs. Common standards allow common tools and common tools cost a lot less than full custom solutions. Check out the blog by Andrew Mandelbaum at NDI – http://www.demworks.org/blog/2012/04/how-xml-can-improve-transparency-and-workflows-legislatures.

Henry Ford once said, “If you think you can do a thing or think you can’t do a thing, you’re right.” I was disappointed to see NCSL come out with their opposition to the DATA Act. Their reasoning is that the DATA Act is a cost they cannot afford at this point. Certainly, we are all feeling the effects of the economic meltdown of the past few years, and it’s hurting the states especially hard. But why can’t the move to open and transparent systems be viewed as an opportunity to improve efficiency and reduce costs? If modern standards-based automation were a liability, would businesses have automated to the extent they have? I don’t see very much focus on using automation as a tool to improve efficiency in legal informatics. It’s an opportunity squandered, I think.

If you want to know more about open legislative standards, consider attending our upcoming “unhackathon”. You can sign up here.

Standard