Akoma Ntoso, Process, Standards, technology, Transparency

Changing the way the world legislates. Together.

I got my start, like my father before me, as a draftsman. I worked for the Cleveland Switchboard Company drawing power distribution switchboards and panel boards while earning my engineering degree at university. That experience led to a job at the Boeing Company, supporting drafting tools as an electronics design engineer.

It was a pivotal time at Boeing, and indeed for the entire industry. When I started, we were supporting drafting tools which used computers to emulate drafting boards. But within a few years, drafting tools gave way to design tools. The reason was simple. Computers enabled a whole different way of thinking. It wasn’t just a matter of putting pencil to paper anymore. Design tools allowed you to use a computer to do the intellectual engineering work behind the schematics that would be produced. My focus shifted from schematic drawings to fault simulations – being able to identify the many ways a system might fail such that those failure modes would never happen (a skill Boeing seems to have lost of late).

The pace of change in legal informatics has been much slower — something that has been frustrating to me. The transformation I witnessed in electronic design is what today allows me to have a virtual supercomputer in my pocket. It’s time for legislation to make the same leap into the future.

We need to get beyond legislative drafting and rethink the process by which policy initiatives become legislation and, ultimately, law. There’s much more to that process than just writing the text of the documents that become bills.

There are many stakeholders in the legislative process – constituents, special interests, politicians, government representatives, and even the lawyers that draft the legislation. Synthesizing all the information they provide into a bill draft is a complex process. This is something I’ve been thinking about for a very long time. In fact, it was realizing that XML was the best medium for transforming the legislative process, in a way like what I had experienced in electronics design, that led me to start Xcential.

We now have the tools, technologies, and the standards to make this transformation a reality. We have tools that allow the lawyers who draft legislation to focus on the intellectual process rather than the technical process of getting the formatting or wording right. We have tools that allow all the stakeholders to work together rather than individually. And we have technologies and standards that ensure the precise representation and handling of the law.

When I left Boeing, I took a job at a design automation company to revolutionize how engineers work – something we called “concurrent design”. I was the product manager for the design management component at the very heart of that system. The company tagline reflected the vision: “Changing the way the world designs. Together.” It’s time for such a big vision in our field.

Standard
Lawsuit, Process, technology, Transparency

Transparent legislation should be easy to read — part II

I have some good news to share. After almost two years under the cloud of litigation regarding a challenge to one of our patent applications, we have reached a settlement that concludes the issue. The Patent Trial and Appeal Board (PTAB) ruled in our favor by denying the patent derivation claim made against us. This was on top of earlier rulings in our favor. What is more, both our patent applications have now been allowed.

While the terms of the settlement remain confidential, this has been a costly exercise for us. For me personally, this was very difficult. Not only did I have to defend my honor and integrity, but I also had to spend half of my personal life savings in the defense of Xcential – with no guarantees that I will ever be able to recoup that expense. Using my own savings for much of the legal bills was the only way to ensure that Xcential would be able to go on. This has certainly affected my life.

If there is something good to come out of this exercise, it is validation that we’re onto something very valuable. With the litigation behind us and both patents in our pocket, we are now able to proceed forward with our plans to serve our markets – selectively, of course.

Over a decade ago, I wrote a blog questioning why federal bills are written the way they are. For someone experienced with state legislation, federal legislation is quite cryptic and difficult to understand. It turns out that the style of amending law found in federal legislation is the common form and can be found around the world. In the U.S. Congress, this style is known as cut and bite¹ amending. With this style, individual word changes are spelled out in a narrative form. For comparison, here is a section from California Assembly Bill 2748 from the current session, written in the style California uses instead:

SECTION 1.  Section 21377.5 of the Water Code is amended to read:

21377.5.  (a) Notwithstanding Section 21377 of this code or Section 54954 of the Government Code or any other provision of law, the Board of Directors of the Tri-Dam Project, which is composed of the directors of the Oakdale Irrigation District and the South San Joaquin Irrigation District, may hold no more than four regular meetings annually at [stricken: the] [inserted: a] Tri-Dam Project [stricken: offices. The Board of Directors of the Tri-Dam Project shall adopt a resolution that determines the location of the Tri-Dam Project offices.] [inserted: office that is located in Sonora, California, or Strawberry, California, or within 30 miles of either city.]

(b) The notice and conduct of these meetings shall comply with the provisions of the Ralph M. Brown Act (Chapter 9 (commencing with Section 54950) of Part 1 of Division 2 of Title 5 of the Government Code).

You can clearly see what changes are being made. However, if written using cut and bite amending, this equivalent section would read something like:

SECTION 1.  Section 21377.5 of the Water Code is amended by:

(a) Deleting the sixth “the” and replacing it with “a”.

(b) After “Tri-Dam Project”, deleting to the end of the subsection and replacing with “office that is located in Sonora, California, or Strawberry, California, or within 30 miles of either city.”

This very terse form of amendment provides no context for the change being made. The change in subsection (a) is completely meaningless without context. This means that a politician tasked with approving these changes must do a significant amount of work to understand what the changes are all about and why they are being made.

Back in 2013, I questioned why the form found in most of the U.S. states, including my state of California, wasn’t used. As seen in the example above, the U.S. states use a different approach to amending – known as amending in full. In this style of amending, the entire section containing the change is restated and the change is shown (some of the time) as stricken and inserted text, as you would find with track changes in a word processor. This approach has the benefit of making the change much clearer by providing its complete context. In California, this approach is mandated by the State Constitution as amended by Proposition 1-a of 1966, a proposition overwhelmingly approved by the voters of California. The Speaker of the California Assembly at the time, Jesse Unruh, had pushed through this constitutional amendment to establish a professional legislature less beholden to the special interests and other pressures that were undermining the effectiveness of the legislature at the time. His reforms were quite sweeping. Among the many changes, one part of this initiative was to make legislation more transparent.

The specific provision that was added, Section 9 of Article IV of the California Constitution, reads:

“A statute shall embrace but one subject, which shall be expressed in its title. If a statute embraces a subject not expressed in its title, only the part not expressed is void. A statute may not be amended by reference to its title. A section of a statute may not be amended unless the section is re-enacted as amended.”

This section of the California Constitution contains two rules. The first is the single subject rule, which limits the scope of each statute. The second is the re-enactment rule, which mandates the amendment-in-full approach by requiring that each amended section be re-enacted in whole (essentially a repeal of the prior section and enactment of a new amended section as a single action). Most U.S. states have these same rules or very similar ones. These two rules go together. One worry with the re-enactment rule is that, by opening an entire section for re-enactment, unwelcome amendments might be added as part of the political process of winning votes. The single subject rule is a guard against that behavior.

At the time of my original blog, I learned that adopting these rules in Congress would be impossible. For one thing, the U.S. Code is less regular than state codes or revised statutes, especially in the non-positive titles, and re-enacting an entire section would be quite complex and could cause other difficulties. Re-enactment rules require consistently bite-sized sections. In addition, the House’s equivalent of the single subject rule, the germaneness rule adopted in 1789, didn’t have quite the same effectiveness as California’s single subject rule. Apparently, the Senate’s equivalent rules, found in Senate Standing Rules XVI and XXII, had even more limited applicability.

As a result, I proposed in my blog that amendments in context be used. With amendments in context, a proposed bill is drafted using the amendments in full style that U.S. states use, from which a cut and bite style bill is generated using automation. With this approach, you get the best of both worlds. The bill is drafted in a form that is easy to understand and easy to manage while the worries of unleashing this amending style are circumvented by retaining the existing amending style for parliamentary procedures. However, at the time, the technology just wasn’t available. Under contract to the Law Revision Counsel, Xcential was just beginning down the path of converting the U.S. Code to processable XML that could feed the automation tools. While we had tools to offer that could do amendments in context, we were constrained by our agreement with California as to how much of our technology could be reused – they had been worried we could inadvertently undermine the successes of Proposition 1-a by empowering the special interests with our technology.
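The derivation at the heart of amendments in context can be sketched as a word-level diff between the current text and the marked-up draft. This is only a toy illustration under my own assumptions (the function name and narrative phrasing are invented here); real drafting systems operate on structured XML with tracked changes, not raw strings.

```python
import difflib

def cut_and_bite(original: str, amended: str) -> list[str]:
    """Derive narrative cut-and-bite instructions from a before/after
    pair of section texts, using a word-level diff."""
    orig_words = original.split()
    new_words = amended.split()
    instructions = []
    matcher = difflib.SequenceMatcher(a=orig_words, b=new_words)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        old = " ".join(orig_words[i1:i2])
        new = " ".join(new_words[j1:j2])
        if op == "replace":
            instructions.append(f'Striking "{old}" and inserting "{new}".')
        elif op == "delete":
            instructions.append(f'Striking "{old}".')
        elif op == "insert":
            # Cite a few preceding words as the anchor for the insertion.
            anchor = " ".join(orig_words[max(0, i1 - 3):i1])
            instructions.append(f'After "{anchor}", inserting "{new}".')
    return instructions

# The drafter edits the full text; the cut-and-bite form is derived:
for line in cut_and_bite("meetings annually at the Tri-Dam Project offices",
                         "meetings annually at a Tri-Dam Project office"):
    print(line)
```

A real system would also resolve ambiguity (the “sixth ‘the’” problem) by counting occurrences within the provision, which is exactly the bookkeeping that makes hand-drafted cut-and-bite amendments so error-prone.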

Today, more than a decade later, much has changed. The U.S. Code is available in an XML format we designed, and a new, more modern LegisPro is available that is both web-based and much more powerful than what we had back then. But there have been other changes too. The Posey Rule has been adopted, requiring that a comparative print be provided alongside proposed law showing how existing law will be affected. This comparative print is also generated by Xcential technology and alleviates much of the problem by allowing politicians to more easily understand what it is they are voting on. However, it still leaves the complexity of drafting and managing cut-and-bite amendments to be addressed.

This problem isn’t limited to U.S. federal bills. It’s a common problem wherever cut and bite amending is employed, particularly in Commonwealth countries or countries with Westminster-based legislative traditions, even if the term cut and bite isn’t used.

At Xcential, we’re going to return to our core mission – to make government processes better through technology. Our goal is improved efficiency, increased accuracy, and most importantly, better transparency for the benefit of the citizens.

  1. The term cut and bite is also sometimes used to refer to the form of amending used to propose amendments to bills themselves. Another term for these types of bill amendments is page and line amendments, as they are usually expressed as references to page and line numbers rather than to provisions. ↩︎
Standard
Process, technology

Building an Agile Team

We’ve recently built our first true Agile development team. It’s been quite a learning experience, but now we’re seeing the results.

At Xcential, we have lots of waterfall process experience. Our backgrounds come from big waterfall companies like Boeing and Xerox. Over the years we’ve worked on very large projects in very traditional ways. In more recent years, we’ve also had a few Agile projects, largely initiated by customers, that have been good training grounds for us — for better or worse.

Like many companies, in recent years we’ve fallen victim to what the U.S. Department of Defense calls Agile BS — when you apply Agile terminology to your existing way of doing business. It’s a way to dilute Agile and turn it into nothing but a series of buzzwords. We’ve had sprints, standups, product owners, backlogs, and all the other bits of Agile — but we haven’t had the mindset that is necessary to make the Agile process work.

To build an Agile team, we have needed to make a few key changes. First, we had to assemble a team of developers who would gel together into a performing team as fast as possible. Then, in order to overcome the inertia of the old way of doing things, we had to ensure that the team was trained to tackle the challenge in front of them. Finally, we have had to ensure that all the team members felt empowered to rise up and take ownership of their project.

An Agile team must be self-managing. This means that all the team members must feel the responsibility to deliver and have a commitment to do their part. Getting to that point has been a challenge — from getting management to let go and trust the team to getting the team members to step up and trust that their responsibilities are real.

I like to think of managing a team as being a game of chess. In a traditional arrangement, the managers are the back row while the developers are the interchangeable pawns in the front row — to be assigned here, there, and everywhere.

In an Agile team, the roles are different. The team is self-managing. There is no front row and back row. Everyone has an important role in the team. This means that everyone should be challenged to step up to a bigger role than they would have had in a traditional team. While some team members are timid at first, having everyone feel empowered to play an important role is a key to the success of Agile.

We still have some challenges. Developers are still bouncing from one project to another. This discontinuity of effort shows as a reluctance to commit to the story points that will ultimately be necessary to complete the project in a timely way. It also distracts from our efforts to form team bonds. It’s hard to consider the team your home team when you’re feeling like a visitor to every team you work on.

Nonetheless, we’re starting to see real results from our prototype Agile team. Continuous integration procedures have been put in place, ensuring a “done” product increment at the end of each sprint. For various reasons, delivery of these iterations to customers has not yet started, but this will be rectified at the end of the next sprint. We have peer reviews which are both improving the quality of the product and providing some degree of cross-training. The team’s velocity is improving, albeit at a slow rate. Over the next few sprints we will start integrating more and more with the other projects — and hopefully drawing them all into our new and more efficient way of building software.

Standard
Akoma Ntoso, Process

Legislative Archeology

One of the cool aspects of my job is that I get to work in different legislative traditions around the world. Sometimes it feels like a bit of an archeological dig, uncovering the history of how a jurisdiction came to be. Traditions are layered upon other traditions to result in unique practices that are clearly derived from one another.

While I am no scholar on the topic, and I’ve yet to come across any definitive description of it, I find exploring it quite fascinating.

So far I’ve come across four distinct, but related, legislative traditions:

  1. The Westminster-inspired traditions found in the UK and around the world in the far reaches of the former British Empire.
  2. The U.S. Federal tradition, which is a distinct variant of UK-inspired legislation, but which has become quite different and complex in comparison. I think that the structure of the U.S. government, as specified by the U.S. Constitution, has led to substantial evolution of legislative practices.
  3. The U.S. states’ traditions, which are also a distinct variant of UK-inspired legislation, but which have changed largely thanks to legislative reforms in the mid-twentieth century.
  4. European traditions which are largely similar to Westminster, but which tend to have their own unique twist, sometimes dating back to Roman times.

I generally distinguish the four traditions by a few key characteristics. It’s like looking at DNA: while a lot of the sequences remain the same, there are a few key differences that reveal the genealogy of the jurisdiction.

The UK tradition is generally layers upon layers of statutes, which are the law of the land. Bills either lay down new laws or amend existing law. Bills that only amend existing laws are often known as amending bills. It often seems that there are around seven hundred to a thousand base statutes. Subsidiary or secondary legislation (rules, regulations, etc.) is quite closely related to primary legislation and quite similar in structure.

The U.S. Federal tradition started the very slow process of re-compiling the statutes as a single large code, the U.S. Code. As this process has been very slow and arduous, the result is a hybrid system with both a code and statutes. The separation of powers makes subsidiary legislation far more distinct, and its relationship to primary legislation is much less obvious.

U.S. states have also adopted codes (or, in some cases, revised statutes) as a means to tidy up and arrange the laws in a more orderly fashion. In general, this task was undertaken in the mid-twentieth century and is complete. Another reform that came at the same time was a forced simplification of bills. Whereas Federal bills can become gigantic omnibus bills with lots of unrelated provisions, state bills are generally constrained to a single subject.

Standard
Process, technology, Uncategorized

GitHub Copilot — Is it the future?

Several months ago, I got admitted to the GitHub Copilot preview. For those of you who don’t know what Copilot is, it’s an AI-based plugin to Visual Studio Code that helps you by suggesting code as you type. If you like the suggestion, you hit tab, and on you go.

It may sound like magic, and in some ways, it does seem like that. Apparently, it learns from the vast base of open-source code found in the GitHub repositories. This, of course, has led to the inevitable charges that it violates fair use of that code, and even that it will ultimately replace developers’ jobs much as factory automation has replaced workers. From my experience, this is more sensationalism than anything real to worry about.

In my recent posts, I’ve covered the DIKW pyramid. It seems we’ve been stuck in the information layer for a long time, only barely touching the knowledge layer in very rudimentary ways. Yes, there are tools like Siri and Alexa which claim to be AI-based virtual assistants, but they just feel like a whole bunch of complicated programming to achieve something that is more annoying than helpful. There is Tesla Autopilot for self-driving cars, but that just seems scary to me. (Full disclosure: I don’t even trust cruise control.) To me, GitHub Copilot is the first piece of software that truly seems to drive deep into the knowledge layer and even reach the wisdom layer. It’s truly simulating some sort of real smartness.

While the sensationalists love to make it seem that Copilot is lifting code from other people’s work and offering it up as a suggestion, I’ve seen nothing whatsoever to suggest that that is what it is doing. Instead, it truly seems to understand what I am doing. It makes suggestions that could only come from my code. It uses my naming conventions, coding standards, and even my coding style. It seems to have analyzed enough of the code base in my application to understand what local functions and libraries it can draw upon. The code it synthesizes is obviously built on templates it has derived by learning. But those templates aren’t just copies of other people’s work. This is how synthesis works in the CAD world I come from (actually, it’s a bit more sophisticated than the synthesis I knew in CAD many years ago), and this is a natural next step in coding technologies.

I’ve been experimenting with what Copilot can do — how far-reaching its learning seems to be. It’s able to help me write JavaScript, and what it is able to suggest is remarkable. However, coding assistance is not its only trick. It even helps with writing comments — sometimes with a bit of an attitude too. Last week I was adding a TODO: comment into the loader part of LegisPro to note that it needed to be modernized. Copilot’s unsolicited suggestion for my comment was “Replace the loader with a real loader”. Thanks, Copilot. As Han Solo once said, “I’m not really interested in your opinion, 3PO”.

Of course, this all leads to the inevitable question. Can it be trained to write legislation? Much to my surprise, it seemingly can. How and why it knows this is completely unknown to me. It’s able to suggest basic amending language and seems to know enough that it can use fragments of quotes from Thomas Jefferson and Benjamin Franklin. I find it incredible that it can even understand the context of legislation and that I did not have to tell it what that context was.

So am I sold on this new technology? Well, yes and no.

It’s not the scary source-code-stealing and eavesdropping application some would make it out to be. The biggest drawback to it is the same reason I don’t even trust cruise control in my car. It’s not that I don’t trust the computer. It’s that I don’t trust myself not to become lazy and complacent and come to believe the computer is always right. I’ve already come across a number of situations where I’ve accepted Copilot’s suggestion without too much thought, only to needlessly waste hours tracking down a problem that would never have existed if I had actually taken the time to write the code.

It’s an interesting technology, and I believe it’s going to be an important part of how software development evolves in the coming years. But as with all new technologies, it must be adopted with caution.

Standard
Process, technology, Track Changes

Moving on Up to Document Synthesis

In my last blog, I discussed the DIKW pyramid and how the CAD world has advanced through the layers while the legal profession was going much slower. I mentioned that design synthesis was my boss Jerry’s favorite topic. We would spend hours at his desk in the evening while he described his vision for design synthesis — which would become the norm in just a few years.

Jerry’s definition of design (or document) synthesis was quite simple — it was the processing of the information found in one document to produce or update another document where that processing was not simple translation. In the world of electronic design, this meant writing a document that described the intended behavior of a circuit and then having a program that would create a manufacturable design using transistors, capacitors, resistors, etc. from the behavioral description. In the software world, we’ve been using this same process for years, writing software in a high-level language and then compiling that description into machine code or bytecode. For hardware design, this was a huge change — moving away from the visual representation of a schematic to a language-based representation similar to a programming language.

In the field of legal informatics, we already see a lot of processes that touch on Jerry’s definition of document synthesis. Twenty years ago, it was seeing how automatable legislation could be, but wasn’t, that convinced me that this field was ready for my skills.

So what processes do we have that meet this definition of document synthesis?

  • In-context amending is the most obvious process – processing the changes recorded in a marked-up proposed version of a bill to extract and produce a separate amending document.
  • Automated engrossing is the opposite process — taking the amending instructions found in one document to automatically update the target document.
  • Code compilation or statute consolidation is another very similar process, applying amending language found in the language of a newly enacted law to update pre-existing law.
  • Bill synthesis is a new field we’ve been exploring, allowing categorized changes to the law to be made in context and then using those changes and related metadata to produce bills shaped by the categorization metadata provided.
  • Automated production of supporting documents from legislation or regulations. This includes producing documents such as proclamations which largely reflect the information found within newly enacted laws. As sections or regulations come into effect, proclamations are automatically published enumerating those changes.
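Automated engrossing, the second process in the list above, can be reduced to its essence: execute a list of strike/insert instructions against the target text. The sketch below is a minimal illustration with invented names, not how any of our products actually work; a production system resolves structured provision references in XML rather than matching raw strings, and handles ambiguity far more carefully.

```python
def engross(law_text: str, instructions: list[tuple[str, str]]) -> str:
    """Apply a sequence of (strike, insert) amendment instructions
    to the target text, refusing ambiguous or unmatched strikes."""
    for strike, insert in instructions:
        if law_text.count(strike) != 1:
            # Real engrossing must resolve references like "the sixth
            # 'the'"; this sketch simply refuses to guess.
            raise ValueError(f"ambiguous or unmatched instruction: {strike!r}")
        law_text = law_text.replace(strike, insert)
    return law_text

section = ("may hold no more than four regular meetings "
           "annually at the Tri-Dam Project offices")
print(engross(section, [("at the Tri-Dam", "at a Tri-Dam"),
                        ("Project offices", "Project office")]))
```

Code compilation and statute consolidation are essentially the same operation applied to enacted law rather than to a bill, which is why these processes share so much underlying machinery.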

In the CAD world, the move to design synthesis required letting go of the visually rich but semantically poor schematic in favor of language-based techniques. Initially there was a lot of resistance to the idea that there would no longer be a schematic. While at University, I had worked as a draftsman and even my dad had started his career as a draftsman, so even I had a bit of a problem with that. But the benefits of having a rich semantic representation that could be processed quickly outweighed the loss of the schematic.

Now, the legislative field is wrestling with the same dilemma — separating the visual presentation of the law, whether on paper or in a PDF, from the semantic meaning found within it. Just as with CAD, it’s a necessary step. The ability to process the information automatically dramatically increases the speed, accuracy, and volume of documents that can be processed — allowing information to be produced and delivered in a timely manner. In our society where instant delivery has become the norm, this is now a requirement.

Standard
Process, technology, Uncategorized

The Knowledge Pyramid

At the very start of my career at the Boeing Company, my boss Jerry introduced the Knowledge Pyramid (the DIKW Pyramid) to me one evening. I had an insatiable thirst for learning, and he would spend hours introducing me to ideas he thought I could benefit from. To me, this was a profound bit of learning that would somewhat shape my career.

At the time, I was working in CAD support, introducing automation technologies to the various engineering projects around the Boeing Aerospace division. The new CAD tools were running on expensive engineering workstations and were replacing largely homegrown minicomputer software from the 1970s.

Jerry explained to me that the legacy software, largely batch tools that crunched data manually input from drawings, represented the data layer. The CAD drawings our tools produced, in contrast, were a digital representation of the designs with sufficient information for both detailed analysis and manufacturing. It would take a generation of new technologies to advance from one layer to the next in the DIKW pyramid — with each generation lasting from ten to twenty years. His interest was in accelerating that pace, and so we studied, as part of our R&D budget, artificial intelligence, expert systems, language-based design techniques, and design synthesis.

While data was all about crunching numbers, information was all about understanding the meaning of the data. Knowledge came from being able to use the information to synthesize (Jerry’s favorite topic) new information and to gain understanding. And finally, wisdom came from being able to work predictively based on that understanding.

When I was introduced to legal informatics in the year 2000, it was a bit of a time warp to me. While the CAD world had advanced considerably and even design synthesis was by then the norm, legal informatics was stuck in neutral in the data processing world of the late 1970s and early 1980s. Mainframe tools, green screen editors, and data entry were still the norm. It was seeing this that gave me the impetus to work to advance the legal field. The journey I had just taken in the CAD world over the prior 15 years was yet to be taken in the legal field. The transition into information processing was to start with the migration to XML — replacing the crude formatting-oriented markup used in the mainframe tools with modern semantic markup that provided for a much better understanding of the meaning of the text.
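The difference this migration makes can be seen in a small sketch. Formatting-oriented markup records only how the text looks; semantic markup records what each piece of text is, so software can compute with it. The element names below are illustrative only, loosely in the spirit of standards like Akoma Ntoso rather than taken from any real schema:

```python
import xml.etree.ElementTree as ET

# A semantically marked-up fragment: structure, numbering, and content
# are explicit rather than implied by fonts and indentation.
semantic = """
<section id="sec-21377.5">
  <num>21377.5.</num>
  <subsection id="sec-21377.5-a">
    <num>(a)</num>
    <content>The board may hold no more than four regular meetings annually.</content>
  </subsection>
</section>
"""

root = ET.fromstring(semantic)
# Because the structure is explicit, things like citation indexes,
# point-in-time views, or amendment targets can be computed directly.
for sub in root.iter("subsection"):
    print(sub.get("id"), sub.find("num").text)
```

With formatting-only markup, extracting even this much structure would require fragile pattern-matching against typography.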

To say the migration to the future has gone slowly would be an understatement. There are many reasons why this has happened:

  • The legacy base of laws has to be carried along — unchanged in virtually every way. This would be like asking Boeing to advance their design tools while at the same time requiring that every other aircraft design ever produced by the company in the prior century also be supported. For law, it is a necessary constraint, but also a tremendous burden.
  • The processes of law are bound by hard-to-change traditions, sometimes enshrined in the constitution of the jurisdiction. This means the tools must adapt more to the existing process than the process can adapt to the tools. Not only does this constraint require incredibly adaptable tools, it is very costly and dampens the progress that can be made.
  • The legal profession, by and large, is not technology driven, and there is little vision into what could be. The pressure to keep things as they are is very strong. In the commercial world, companies simply have to advance or they won’t be competitive and will die. Jurisdictions aren’t in competition with one another, and so the need to change is somewhat absent.

For advancements to come, there needs to be pressure to change. Some of this does come naturally — the hardware the old tools run on won’t last forever. New legislators entering their political careers will quickly be frustrated by the archaic paper-inspired approach to automation they find. For instance, viewing a PDF on a smartphone is not the best user experience. It is that smartphone generation that will drive the need to change.

Over the next few blogs, I’m going to explore where legal informatics is on the DIKW pyramid and what advancements on the horizon will move us up to higher levels. I’ll also take a look at new software technologies that point the way to the future — for better or worse.

Standard
LEX Summer School, Process, technology, Uncategorized

Escaping a Technology Eddy

Do you need to escape a technology eddy? In fluid dynamics, an eddy is the swirling of a fluid that causes a reverse current against a downstream flow. It often forms behind a major obstacle. The swirling motion of an eddy creates resistance to forward motion by creating a backward force. Eddies are also seen in air and electromagnetic systems.

I see a similar phenomenon in my work that I’m going to coin a technology eddy. A technology eddy forms in organisations that are risk-averse, have restricted budgets, or are simply more focused on software maintenance of a major system than on software development. Large enterprises, in particular, often find their IT organisations trapped in a technology eddy. Rather than going with the flow of technological change, the organisation drifts into a comfortable period where change is restricted to the older technologies they are most familiar with.

[Image: Technology Eddy]

As time goes by, an organisation trapped in a technology eddy adds to the problem by building more and more systems within the eddy — making it ever more difficult to escape the eddy when the need arises.

I sometimes buy my clothing at Macy’s. It’s no secret that Macy’s, like Sears, is currently struggling against the onslaught of technological change. Recently, when paying for an item, I noticed that their point-of-sale systems still run on Windows 7 (or was that Windows Vista?). Last week, on the way to the airport, I realised I had forgotten to pack a tie. So I stopped in at Macy’s, only to find that they had just experienced a ten-minute power outage. Their ancient system, what looked to be an old Visual Basic app, was struggling to reboot. I ended up going to another store — all the other stores in the mall were up and running again quite quickly. The mall’s ten-minute power outage cost Macy’s an hour’s worth of sales because of old technology. The technology eddy Macy’s is trapped in is not only costing them sales in the short term, it’s killing them in the grand scheme of things. But I digress…

I come across organisations trapped in technology eddies all the time. IT organisations in government are particularly susceptible to this phenomenon. In fact, even Xcential got trapped in a technology eddy. With a small handful of customers and a focus on maintenance over development for a few years, we had become too comfortable with the technologies we knew and the way in which we built software.

It was shocking to me when I came to realise just how out-of-date we had become. Not only were we unaware of the latest technologies, we were unaware of modern concepts in software development, modern tools, and even modern programming styles. We had become complacent, assuming that technology from the dawn of the Millennium was still relevant.

I hear a lot of excuses for staying in a technology eddy. “It works”, “all our systems are built on this technology”, “it’s what we know how to build”, “newer technologies are too risky”, and so on. But there is a downside. All technologies rise up, have a surprisingly brief heyday, and then slowly fade away. Choosing to continue within a technology eddy using increasingly dated technology ensures that sooner or later, an operating system change or a hardware failure of an irreplaceable part will create an urgent crisis to replace a not-all-that-old system with something more modern. At that point, escaping the eddy will be of paramount importance and you’ll have to paddle at double speed just to catch up. This struggle becomes the time when the price for earlier risk mitigation will be paid — for now the risks will compound.

So how do you avoid the traps of a technology eddy? For me, the need to escape our eddy became most apparent as we were exposed to people, technologies, and ideas beyond the comfortable zone in which our company existed. Hearing new ideas from developers outside our sphere of influence and being exposed to requirements from new customers made us quickly realise that we had become quite old-fashioned in our ways. To stay relevant you must get out and learn — constantly. Go to events that challenge your thinking rather than reinforce it.

Today we are once more a state-of-the-art company. We’ve adopted modern development techniques, upgraded our tools, upgraded our technologies, and upgraded our coding skills. These changes allow us to compete worldwide and build software for multiple customers in a fully distributed way that spans companies, continents, and time zones.

I hope we’ll remember this lesson and focus more on continuous improvement rather than having to endure a crash course of change every few years.

 

Process, Uncategorized

Becoming Agile

Lately we’ve become quite Agile. More and more, our government customers have started to impose Agile methodologies on us. While I’ve always thought of our existing methodologies as being quite nimble, adopting Agile and Scrum methodologies has required some adaptation on my part.

Early in the game, I found Agile to be more of a hindrance than a help. The drumbeat of each sprint was wearing me out – and I started to feel the inevitable effects of burnout creeping into my every thought.

But then a remarkable thing happened. I found myself not only defending Agile, but advocating it for our other projects. I was quite surprised to find myself having become such a big supporter. So what changed?

Early on, Agile was new for all of us. Our team was new too, geographically distributed across three parts of the world, each eight hours apart. That team consisted of representatives from a set of customers and several partners, all learning to work together to build a challenging solution. We adopted the Scrum methodology and planned out a long series of two-week sprints. Each sprint had a set of stories assigned to it as we set off to build the most awesome bill drafting system of all time.

[Image: Progress vs. Refinement]

The problem was that the pace was too aggressive. In a software development project, you need to manage two different aspects – making forward progress by adding features while ensuring a sound implementation through refinement. Agile methodologies lean away from lots of up-front design. This makes it possible to show lots of forward momentum early, but the trade-off is that the design will need to be refactored often as new requirements are uncovered and added to the picture. We were too focused on the forward momentum and were leaving a trail of unfinished “programming debt” in our wake. This debt was causing me increasing anxiety as time marched on.

There is an important concept in Agile Scrum called the retrospective. It’s all about continuous improvement of the process. As we’ve grown as a team, we’ve become better at implementing retrospectives. These led to the most important change we’ve made – moving from a two-week to a three-week sprint. We didn’t just add time to our sprints, we fundamentally changed the structure of a sprint. We still schedule two weeks’ worth of tasks in each sprint, but rather than assuming that everything will work out perfectly, we leave the third week open for integration, testing, and development slack to be taken up by any refactoring that has become necessary.

[Image: BritSprint]

This third week, while arguably slowing us down, allows us to emerge from each sprint in far better shape to begin the next one. We just have to be disciplined enough not to squeeze regular development tasks into that third week. By working down programming debt continuously, subsequent sprints become more predictable. For various reasons, we temporarily returned to two-week sprints, and the problem of accumulating programming debt returned. The lesson learned is that you can’t build a complex system on top of a rickety foundation – you must continuously work to ensure a robust base upon which to build. Without this balance, Agile just becomes a way to expedite a project at the expense of good development practices.

Another key change has been in how we use the tools that help us do our work. As I mentioned earlier, our development teams are distributed around the world, so it’s important that we communicate effectively despite the distance. Daily stand-ups with the entire team are not possible, although we do ensure at least two whole-team meetings each sprint. We use four primary tools – GitHub as our source code repository, AWS for our development and test servers, Slack for casual day-to-day conversation, and JIRA for managing the stories and tasks. It is the use of JIRA that has taken the most adaptation. Our original methodology was quite clumsy, but with each sprint we have refined our usage to the point that it has become a very effective tool. Now, a dashboard presents me with a very clear picture of each sprint’s goals, and everyone can monitor progress towards those goals as the sprint proceeds – there are no surprises.

Agile and Scrum are allowing a disparate group of customers and vendors to become a very highly performing software development team. We’re far from perfect, but with every sprint we learn more, make changes, and emerge as a better team than before.

 

Process, Transparency

Changing the way the world is governed. Together.

I’ve recently been marveling at how software development has changed in recent years. Our development processes are increasingly integrated with both our government customers and our commercial partners — using modern Agile methodologies. This largely fulfills a grand vision I was a part of very early in my career.

I started my career at the Boeing Company working on Internal Methods and Processes Development (IMPD). Very soon, a vision came about: Concurrent Engineering, in which all aspects of the product development cycle, including all disciplines, all partners, and all customers, were tightly integrated in a harmonious flow of information. Of course, making that vision a reality at Boeing’s scale has taken some time. Early on, Boeing had great success on the B777 programme, where the slogan was “Working Together”. A bit later, on the B787 programme, where they went a few (or perhaps many) steps too far, they stumbled for a while. This was all Agile thinking — before there was anything called Agile.

Boeing’s concurrent engineering efforts quickly inspired one of its primary CAD suppliers, Mentor Graphics. Mentor was hard at work on its second-generation platform of software tools for designing electronic systems, and Concurrent Engineering was a great customer-focused story to wrap around those efforts. Mentor’s perhaps arrogant tagline was “Changing the way the world designs. Together.” Inspired, I quickly joined Mentor Graphics as the product manager for data management. I soon found that the magnitude of the development effort had turned the company sharply inward; it had become anything but Agile. Mentor’s struggle to build a product line that marketed Concurrent Engineering became the very antithesis of the concept it touted. I eventually left Mentor Graphics in frustration and drifted away from process automation.

Now, two decades later, a remarkable thing has happened. All those concepts we struggled with way back when have finally come of age. It has become the way we naturally work — and it is something called Agile. Our development processes are increasingly integrated with both our customers and our partners around the world. Time zones, while still a nuisance, have become far less of a barrier than they once were. Our rapid development cycles are quite transparent, with our customers and partners having almost complete visibility into our repositories and databases. Tools and services like GitHub, AWS, Slack, JIRA, and Trello allow us to coordinate the development of products shared among our customers with bespoke layers built on top by ourselves and our partners.

[Image: Concurrent Engineering]

It’s always fashionable for political rhetoric to bash the inefficiencies of big government, but down in the trenches where real work gets done, it’s quite amazing to see how modern Agile techniques are being adopted by governments and the benefits that are being reaped.

As we at Xcential strive to become great, it’s important for us to look to the future with open eyes so that we can understand how to excel. In the new world, as walls have crumbled, global integration of people and processes has become the norm. To stay relevant, we must continue to adapt to these rapidly evolving methodologies.

Our vision is to change the way the world is governed — through the application of modern automation technology. One thing is very clear: we’re going to do it together with our customers and our partners all around the world. This is how you work in the modern era.

In my next blog post, I will delve a little more into how we have been applying Agile/Scrum and the lessons we have learned.
