Tech Policy Should Look Like Engineering Requirements

Why do some regulations fail to achieve their goals?

Are they doomed to fail from the start?

In 1973, Arab oil-exporting countries embargoed shipments to countries that had supported Israel in that year's Yom Kippur War, the United States among them. In response, Congress passed the first Corporate Average Fuel Economy (CAFE) standards in 1975 to reduce fuel consumption at home.

These standards stuck around and evolved to focus on global warming and emissions instead. In 2007, Congress rewrote the legislation to address some (legitimate) concerns. It created different efficiency targets for different vehicle "footprints" (wheelbase times track width), with larger vehicles facing lower miles-per-gallon requirements.

While the legislation was intended to reduce the amount of fuel consumed, it actually just incentivized manufacturers to build larger vehicles and stop selling small ones. The CAFE standards killed the station wagon, birthed the modern SUV and luxury pickup truck, and failed to get us to the goal of 50 MPG average efficiency by 2025.

There's a similar story about unintended side effects from the British colonial administration of India. The government was concerned about the number of cobras in Delhi, and offered a bounty for each dead cobra brought in by hunters. The locals hunted cobras and were rewarded.

However, some of them started breeding cobras just to claim the bounties. As soon as the British got wind of the scam, they canceled the program. The scammers released their captive cobras, and the city was worse off than before. This idea of regulations creating "perverse incentives" is even called the "cobra effect."

These are both famous examples of policy having unintended side effects. But why did CAFE fail to achieve the (ambitious) goals Congress set in 2007? Was it doomed to fail from the start?

CAFE's story isn't unique. A couple of conversations I've had in the past six months got me thinking: regulations that suffer from the cobra effect have something in common. They're not written like engineering requirements, yet engineers have to follow them. In this post I'll touch on a couple other examples, and hopefully leave you wondering whether engineers could have written them better.

How Lawyers Screwed Up

GDPR and the Cost of Compliance

Since 2018 the European Union has required all organizations that handle the personal data of people in the EU to comply with the General Data Protection Regulation (GDPR). This legislation was promoted as a way to give consumers power over how their data was used, and to break the stranglehold of data-collecting/advertising monopolies like Google.

Well, it didn't. In fact, it made the advertising space less competitive. Big companies like Google easily absorbed the cost of GDPR compliance, while smaller companies and startups had their margins squeezed by the extra burden. Companies like Microsoft (famously a law firm with a software engineering division attached) were already doing large-scale compliance tracking. Smaller firms were not, and were hit harder than the big ones.

GDPR and, to a lesser extent, other regulations like it (in the UK, California, Texas, and some other states) are more examples of well-intentioned legislation written by lawyers having the opposite of its intended effect. GDPR fortified the position of the monopolies instead of breaking it.

To get a feel for how the GDPR reads, let's look at how Article 32 (Security of processing) opens:

Taking into account the state of the art, the costs of implementation and the nature, scope, context and purposes of processing as well as the risk of varying likelihood and severity for the rights and freedoms of natural persons, the controller and the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate:
(a) the pseudonymisation and encryption of personal data;
(b) the ability to ensure the ongoing confidentiality, integrity, availability and resilience of processing systems and services;
(c) the ability to restore the availability and access to personal data in a timely manner in the event of a physical or technical incident;
(d) a process for regularly testing, assessing and evaluating the effectiveness of technical and organisational measures for ensuring the security of the processing.

What is a "level of security appropriate to the risk?" What does it mean to take into account "costs of implementation?" How available and resilient should processing systems be? What does "timely manner" mean? All of these questions are (intentionally) left to the courts to decide, but that creates friction and uncertainty.

Remember, GDPR was brought to you by the same geniuses that created cookie pop-ups. We'll come back to this later.

America's EAR, FDPR, and Semiconductors

To slow the PRC's progress in designing advanced chips, the US government has applied export controls and other restrictions, with varying levels of effect. The regulations we'll discuss here were intended to hinder China's progress and protect American companies.

The US uses the Export Administration Regulations (EAR) as the primary framework for export controls. An entire agency, the Bureau of Industry and Security, handles the task of preventing sensitive technology from being sold to the wrong people. This is not a bad thing; our foreign trade shouldn't directly hurt our national interests. However, as a side effect, the regulations also hurt America's position in the semiconductor supply chain.

Before 2020, any foreign-made product containing more than 25% US-origin content by value was subject to US export controls. This was the de minimis rule, and it meant a device (e.g. a graphics card) made in Taiwan could be export-controlled by the US if more than 25% of its value came from American parts. Since the US doesn't want China to have access to advanced chips or chipmaking equipment, that card couldn't be sold to China.

However, the PRC has a population of over a billion people. It's a huge market. Of course companies want to sell to China. So they started "designing out" American components and swapping them for Japanese or Korean components instead, to keep the American-origin percentage below 25%. Then they could sell their product to China without being subject to changing export controls.
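The arithmetic behind "designing out" is worth seeing. Here's a toy bill of materials for a hypothetical graphics card; every part name and dollar value is invented, but the threshold math is the actual mechanism companies exploited:

```python
# Toy bill of materials (BOM) for a hypothetical graphics card.
# Part names and dollar values are invented for illustration.

def us_origin_share(bom: dict[str, tuple[float, str]]) -> float:
    """Fraction of total BOM value that is US-origin."""
    total = sum(value for value, _ in bom.values())
    us = sum(value for value, origin in bom.values() if origin == "US")
    return us / total

bom = {
    "gpu_die":    (120.0, "US"),
    "memory":     (60.0,  "KR"),
    "pcb":        (20.0,  "TW"),
    "vrm":        (30.0,  "US"),
    "cooler":     (25.0,  "CN"),
    "connectors": (10.0,  "US"),
}
print(f"US-origin share: {us_origin_share(bom):.0%}")   # 60% -> export-controlled

# "Design out" the American parts one substitution at a time...
bom["vrm"] = (30.0, "JP")       # Japanese voltage regulator
bom["gpu_die"] = (120.0, "TW")  # non-US-designed die
print(f"After design-out: {us_origin_share(bom):.0%}")  # 4% -> below 25%, free to sell
```

Nothing about the product got less advanced; only the invoice changed.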

In 2020 the Foreign Direct Product Rule (FDPR) was expanded to close the de minimis loophole. Now, products designed with US software (e.g. Cadence EDA tools) or made on US-origin equipment (e.g. in semiconductor fabs) are subject to US export controls regardless of what percentage of the finished product is American.

This greatly expanded the scope of US export controls, but it also drove companies doing business with China to transition away from US toolchains. Huawei is swapping US-sourced RF chips for Chinese ones, SMIC stockpiled Japanese lithography equipment, and European companies like STMicroelectronics gained market share by positioning themselves as "independent."

Asian and European companies benefitted from a "de-risking" spending spree by Chinese companies while American firms lost revenue and Japan kept supplying "mature node" (read: old) chip tech to China. The policies are slowing China's growth in semiconductors, but they've also harmed the competitiveness of American firms in the process.

I'm not going to pretend I know better on this one. National security and economic policy are full of cost-benefit analyses and weighing of trade-offs, and I have nowhere near a complete picture of what's involved. But surely a more coherent effort could have stopped advanced tech sales without incentivizing "de-risking" and "designing out" strategies in friendly countries.

How Engineers Could've Done Better

Engineers write product requirements that lay out expectations for a completed system. These requirements have some features that CAFE, the GDPR, and America's export control regime all lack.

Some relevant properties of engineering requirements are these:

  • Atomicity: Compound statements are disallowed. An individual statement contains a single atomic requirement.
  • Unambiguity: Subjective words like "timely" or "safe" aren't helpful; the courts just wind up defining them later.
  • Verifiability: There is a feasible way to check that the requirement is met. This means you need a metric you can test (see the sketch after this list).
  • Quantifiability: The targets are specified precisely, so that they're unambiguous and verifiable.
  • Consistency: Individual requirements don't contradict each other without defining a trade-off. You generally can't design a part with 300% margins that's also weight-efficient.
  • Completeness: There are no unhandled edge cases, and negative requirements (what the system should not do) are included.
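To make "verifiability" and "quantifiability" concrete, here's a sketch of a requirement once it has a testable metric. The four-hour recovery target and the requirement ID are invented for illustration; compare it to GDPR's "timely manner":

```python
# A quantified requirement can be checked mechanically. Compare:
#   Ambiguous:  "restore access to personal data in a timely manner"
#   Verifiable: "restore access to personal data within 4 hours"
# The 4-hour figure and the REQ ID are invented for illustration.

RECOVERY_LIMIT_HOURS = 4.0

def meets_req_017(drill_times_hours: list[float]) -> bool:
    """REQ-017: every recovery drill shall restore access within 4 hours."""
    return all(t <= RECOVERY_LIMIT_HOURS for t in drill_times_hours)

drills = [1.5, 3.2, 2.8]  # measured hours-to-restore from three drills
assert meets_req_017(drills)

# "Timely" can't be asserted on; "4 hours" can. That's the entire gap
# between a legal standard and an engineering requirement.
```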

Would regulations be improved if their authors tried to apply these principles? I think so.

ADA & Requirements Done Right

The Americans with Disabilities Act (ADA) changed the way our buildings and cities look by requiring that accessibility be baked into designs from the start. It's had a massive positive impact, even for people without disabilities. It's a case study in writing requirements-based regulation.

As an example, let's look at an excerpt of Title IV of the ADA.

The [Federal Communications] Commission shall, not later than 1 year after July 26, 1990, prescribe regulations to implement this section, including regulations that—
(A) establish functional requirements, guidelines, and operations procedures for telecommunications relay services;
(B) establish minimum standards that shall be met in carrying out subsection (c) of this section;
(C) require that telecommunications relay services operate every day for 24 hours per day;
(D) require that users of telecommunications relay services pay rates no greater than the rates paid for functionally equivalent voice communication services with respect to such factors as the duration of the call, the time of day, and the distance from point of origination to point of termination;
(E) prohibit relay operators from failing to fulfill the obligations of common carriers by refusing calls or limiting the length of calls that use telecommunications relay services;
(F) prohibit relay operators from disclosing the content of any relayed conversation and from keeping records of the content of any such conversation beyond the duration of the call; and
(G) prohibit relay operators from intentionally altering a relayed conversation.

This act empowers the FCC to set hard engineering requirements for public infrastructure (in this case, telecom equipment) so that it's accessible to everyone. It's a brilliant and thoughtful piece of legislation.

One of the critiques of regulations that look like engineering requirements is that they're "brittle." If a piece of legislation had, say, mandated SHA-1 as a hashing algorithm, it would be outdated today since SHA-1 is insecure. An amendment would be needed. By assigning an agency the power to update the specifics, you get clear engineering requirements that can be updated as technology evolves.

You can read the regulations implemented by the FCC here. No matter where you scroll, it reads a lot like an actual Product Requirements Document. Look at the section I linked: each line lays out an atomic, verifiable, quantifiable, and unambiguous requirement. An engineer can test a system or building and determine whether it's compliant.

I'm not saying the ADA hasn't created some perverse incentives. It has, especially around hiring decisions. But in general, the technical aspects are well-written and builders can reliably verify compliance.

Back to the GDPR

The GDPR, on the other hand, violates every principle in the list, but perhaps it violates "unambiguity" most egregiously. Many of its terms lack hard definitions, and in the absence of rulings from the Court of Justice of the European Union (CJEU), lower authorities have issued their own directives and advice.

In September the CJEU delivered a judgment on whether "pseudonymized" data is personal data. The court considers a dataset "personal data" only if the intended recipient is able to reconstruct the identities of the people involved. However, the current guidance from the European Data Protection Board (EDPB) treats pseudonymized data as "personal data" in all cases. The EDPB's guidance is stricter than the current legal precedent, so the Board and the Court disagree.

If the term "personal data" were better defined, this conflict would be impossible, but at the moment, it smells like a loophole. I could offload data processing to a third party without having to keep up with GDPR compliance, as long as they don't have access to the identifiers or the look-up table. According to the court, this is okay. According to the EDPB, I am about to be fined for non-compliance. Who's right?
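For what it's worth, the arrangement under dispute is mechanically simple. Here's a minimal sketch of pseudonymization (names and records invented): the processor receives only opaque tokens, while the look-up table stays with the data controller:

```python
import secrets

# Minimal sketch of pseudonymization. All names/records are invented.
# The controller keeps the look-up table; the third-party processor
# only ever sees opaque tokens.

lookup_table: dict[str, str] = {}  # token -> real identity (stays with controller)

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a random token."""
    token = secrets.token_hex(8)
    lookup_table[token] = record["name"]
    return {"id": token, "purchase": record["purchase"]}

raw = [
    {"name": "Alice Example", "purchase": "laptop"},
    {"name": "Bob Example",   "purchase": "monitor"},
]
outsourced = [pseudonymize(r) for r in raw]
print(outsourced)  # tokens only -- the processor holds no key

# Per the CJEU: not personal data *for the processor*, which can't
# re-identify anyone. Per the EDPB: still personal data, full stop.
# Same bytes, two answers.
```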

I'm convinced the ADA offers a better model. If all the GDPR did was empower the Data Protection Board to write specific requirements, the Board's definition would be the only definition. But that's not what happened, and now the EU is stuck with ambiguous definitions that must be hammered out in court.

Fixing CAFE

I think CAFE, like the GDPR, violates the properties of good requirements, but in a different way. The issue with CAFE is more about "consistency" than "unambiguity." You cannot simultaneously ask for a more efficient vehicle fleet and allow vehicles to be less efficient if they are larger. Companies will just build larger vehicles to avoid the extra engineering effort.
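To make the inconsistency concrete, here's a toy model of a footprint-based target. The real CAFE curves are piecewise linear in footprint; the coefficients below are invented purely for illustration:

```python
# Toy footprint-based fuel-economy target. The real CAFE curves are
# piecewise linear in footprint; these coefficients are invented.

def target_mpg(footprint_sqft: float) -> float:
    """Required MPG for a vehicle of a given footprint, clamped to a floor."""
    return max(25.0, 45.0 - 0.35 * (footprint_sqft - 41.0))

sedan = 45.0  # sq ft, roughly a midsize sedan
suv   = 55.0  # sq ft, roughly a large SUV

print(f"Sedan target: {target_mpg(sedan):.1f} MPG")  # 43.6 MPG
print(f"SUV target:   {target_mpg(suv):.1f} MPG")    # 40.1 MPG

# A sedan that only manages 41 MPG fails its target. The manufacturer
# can improve the powertrain -- or stretch the footprint by ~7 sq ft
# until the target falls to 41 MPG. Growing the car is often the
# cheaper engineering path. That's the perverse incentive.
```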

CAFE failed because it "punished" the wrong thing. Congress wanted to reduce fuel usage. Yet the standards made it harder to build smaller vehicles. Shouldn't the regulations have made it harder to guzzle gas instead?

I'm no economist, but it seems like making more efficient cars cheaper, and making less efficient cars more expensive, would incentivize people to buy more efficient cars. Maybe a flat tax on less efficient vehicles would have the desired effect? I don't see it leading to ever-larger SUVs as a side effect, so perhaps it's already an improvement.

Good Requirements & Bad Requirements

This comes down to a fundamental idea in systems engineering: requirements should be descriptive, not prescriptive.

I'll say it again. Requirements should be descriptive, not prescriptive.

A descriptive requirement describes what things the system shall and shall not do. A prescriptive requirement prescribes how the system shall do "the things."

I picked up an example at Trane. One of the mechanical engineers was drafting some requirements for firmware in our lab automation setup. He prescribed a search algorithm: "iterate through the list until you've found the key." One of the software guys explained that we had better search algorithms, but the prescriptive requirement would block us from using them. It was refactored into a descriptive requirement, "the system shall locate the key in the list," and everyone was happy.
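Recast in code (a paraphrase of those two requirements, not the actual Trane firmware):

```python
# Prescriptive: "iterate through the list until you've found the key."
# The requirement itself bakes in a linear scan and forbids anything faster.
def find_prescriptive(items: list[str], key: str) -> int:
    for i, item in enumerate(items):
        if item == key:
            return i
    return -1

# Descriptive: "the system shall locate the key in the list."
# Any correct search satisfies it -- e.g., an index built once up front.
def find_descriptive(items: list[str], key: str) -> int:
    index = {item: i for i, item in enumerate(items)}  # O(1) lookups after one O(n) build
    return index.get(key, -1)

items = ["pump", "valve", "sensor", "chiller"]
assert find_prescriptive(items, "sensor") == find_descriptive(items, "sensor") == 2
```

Both functions meet the descriptive requirement; only one meets the prescriptive one.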

CAFE used prescriptive requirements to tell companies how efficient cars of certain sizes must be, instead of describing the objective: "less efficient cars shall be less attractive to operate." Congress tried to be too clever, and wound up creating a perverse incentive.

Consistent Export Controls

As with CAFE, the government tried to be too clever with the 25% de minimis threshold. If the objective was to block sales of advanced technologies to China, that should have been the rule. Instead, the rule said, "any advanced technology with more than 25% American components cannot be exported to China."

That created a perverse incentive to swap American-made components for foreign ones, so companies could keep the American-origin share below 25%. For years, that eroded America's position in the semiconductor industry. The federal government eventually course-corrected by implementing the FDPR and getting other nations like the Netherlands and Japan to agree to aligned export controls. The question is, why didn't we go for that from the start?

The failure here was an example of Goodhart's Law:

When a measure becomes a target, it ceases to be a good measure.

The de minimis threshold (the measure) became a target, and ceased to fulfill the purpose of blocking advanced tech from entering China. The FDPR is a better method of determining what is export-controlled and what is not, because there's no arbitrary measure for companies to game. So the first Trump administration and the Biden administration actually fixed this regulatory issue by moving to more descriptive, more consistent requirements.

Again, national security and trade policy are especially complicated, and I'm not claiming to know better. I just think a more coordinated strategy could have minimized side effects.

Conclusion

The cobra policy, CAFE, the GDPR, and the EAR's de minimis rule all created perverse incentives. CAFE and the EAR were too prescriptive. The GDPR was too ambiguous. In contrast, the ADA is generally unambiguous and consistent.

To be clear, I'm not saying a "product requirements mindset" would fix every piece of legislation, or that every piece of legislation should look like the ADA. I'm saying that the product requirements mindset can improve regulations which engineers have to follow.

If policy-makers drew on the principles of good requirements in their regulations, I think verifying compliance would be easier and creating perverse incentives would be harder. But many of our politicians are lawyers by training (I think there's exactly one engineer in Congress), so they probably aren't used to writing "good requirements." They're used to writing legal documents. I'm not trying to bash the lawyers; they're just not usually exposed to systems engineering (as far as I know!).

This is one of those areas where people with a lot of different backgrounds have to collaborate. Governments aren't going to get tech policy right unless technical people can weigh in. I learned a lot from writing this post, and I'm interested in reading more in this space. If you have an example of regulation done well, or more stories about perverse incentives, I'd love to hear about it. If you think this idea is ridiculous, or that I've overlooked something important, let me know! You can email me at ethan.barry@howdytx.technology or reach out via LinkedIn.

Thanks for reading!