Technology may reach a point where free use of one person’s share of humanity’s resources is enough to easily destroy the world. I think society needs to make significant changes to cope with that scenario.
Mass surveillance is a natural response, and sometimes people think of it as the only response. I find mass surveillance pretty unappealing, but I think we can capture almost all of the value by surveilling things rather than surveilling people. This approach avoids some of the worst problems of mass surveillance; while it still has unattractive features it’s my favorite option so far.
This post outlines a very theoretical and abstract version of this idea. Any practical implementation would be much messier. I haven’t thought about this topic in great depth and I expect my views will change substantially over time.
We’ll choose a set of artifacts to surveil and restrict. I’ll call these heavy technology and everything else light technology. Our goal is to restrict as few things as possible, but we want to make sure that someone can’t cause unacceptable destruction with only light technology. By default something is light technology if it can be easily acquired by an individual or small group in 2017, and heavy technology otherwise (though we may need to make some exceptions, e.g. certain biological materials or equipment).
Heavy technology is subject to two rules:
- You can’t use heavy technology in a way that is unacceptably destructive.
- You can’t use heavy technology to undermine the machinery that enforces these two rules.
To enforce these rules, all heavy technology is under surveillance, and is situated such that it cannot be unilaterally used by any individual or small group. That is, individuals can own heavy technology, but they cannot have unmonitored physical access to that technology.
For example, a modern factory would be under surveillance to ensure that its operation doesn’t violate these rules. As a special case of rule #2, the factory could not be used to produce heavy technology without ensuring that technology is appropriately registered and monitored. The enforcement rules would require the factory be defended well enough that a small group (including the owner of the factory!) could not steal heavy machinery from the factory or use it illicitly. Because a small group would not have unrestricted access to any heavy technology, it might be very easy for a small amount of heavy technology to defend the factory.
The cost of this enforcement would be paid by people who make heavy technology. Because heavy technology can only be created by using other heavy technology (which is under surveillance) or by large groups (which have limited ability to coordinate illegal activities), it is feasible for law enforcement to be aware of all new heavy technology and ensure that it is monitored.
Sometimes this surveillance and enforcement can be provided by the technology itself. For example, computers could be built so that they can only perform approved computations (I realize this is objectively dystopian). An attacker with access to heavy technology could almost certainly circumvent such restrictions, but a would-be attacker has access only to light technology. The technology may need to monitor its environment, phone home to law enforcement if things look weird, and potentially be prepared to disable itself (e.g. with explosives).
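As a minimal sketch of the "only approved computations" idea (all names and the whitelist mechanism here are hypothetical; a real design would involve signed attestations, tamper-resistant hardware, and much more): the machine checks each program against a regulator-maintained whitelist of hashes, and refuses and phones home on anything else.

```python
# Hypothetical sketch: a machine that runs only whitelisted programs
# and alerts law enforcement when asked to run anything else.
import hashlib

APPROVED_HASHES = set()  # in practice, signed and updated by regulators


def register(program: bytes) -> None:
    """Regulator-side step: add an approved program's hash to the whitelist."""
    APPROVED_HASHES.add(hashlib.sha256(program).hexdigest())


def try_run(program: bytes) -> str:
    """Machine-side step: run only if the program's hash is whitelisted."""
    digest = hashlib.sha256(program).hexdigest()
    if digest in APPROVED_HASHES:
        return "running"
    # Unapproved computation: refuse, and phone home to law enforcement.
    return "refused; alert sent"


register(b"approved payroll software")
print(try_run(b"approved payroll software"))  # running
print(try_run(b"novel pathogen simulator"))   # refused; alert sent
```

The point of the sketch is only that the check happens on the device itself, so the owner never needs to be personally surveilled, only the machine.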
In order to relax requirements on some type of heavy machinery, e.g. to release it to unmonitored consumers, the producer needs to convince regulators that it doesn’t constitute a threat to these rules. This could be due to inherent limitations of the technology (e.g. a genetically modified extra-juicy pineapple is not threatening) or because of restrictions that will be hard for an individual or small group to circumvent (e.g. the computer described above that only runs software approved by law enforcement, provided its defenses look solid). If I want to release heavy technology, I have to pay for the costs of the evaluation process to determine whether it is safe.
These evaluations could be organized hierarchically. At the “root” are very detailed and cautious evaluations that are expensive and carried out rarely. A root evaluation wouldn’t just approve a single object, it would specify a cheaper process that could be used to approve certain kinds of items. For example, I could propose a simpler process for evaluating new materials, and perform an extensive evaluation to convince regulators that this simple process is reasonably secure. Then rather than having an extensive evaluation when I want to release a new plastic, I can follow this much cheaper new-material-approval-process. There could be several levels of delegated evaluations.
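The delegation scheme above can be sketched as a chain of approval processes, each authorized by the one above it and limited to a narrower scope (this is an illustrative model, not a proposed implementation; the class and predicate names are made up):

```python
# Illustrative sketch of hierarchical evaluations: a root process
# authorizes cheaper sub-processes, each restricted to a narrower scope.
class ApprovalProcess:
    def __init__(self, name, scope, parent=None):
        self.name = name
        self.scope = scope      # predicate: does an item fall under this process's remit?
        self.parent = parent    # the process that authorized this one (None for the root)

    def delegate(self, name, scope):
        """Root-evaluation step: authorize a cheaper process with narrower scope."""
        return ApprovalProcess(name, scope, parent=self)

    def approves(self, item):
        """An item is approved only if every process up the chain covers it."""
        proc = self
        while proc is not None:
            if not proc.scope(item):
                return False
            proc = proc.parent
        return True


# Example: an expensive root evaluation delegates a cheaper
# new-material-approval-process, which can then approve a new plastic.
root = ApprovalProcess("root", scope=lambda item: True)
materials = root.delegate("new-materials", scope=lambda item: item["kind"] == "material")

print(materials.approves({"kind": "material", "name": "new plastic"}))  # True
print(materials.approves({"kind": "compute", "name": "GPU cluster"}))   # False
```

The design choice being illustrated: an item approved by a delegated process is implicitly vouched for by every evaluation above it, so the expensive root evaluation is amortized across many cheap approvals.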
Ideally we’d engage in red team exercises to probe enforcement mechanisms and ensure that they were adequate. These could apply to every step of the process, from finding creative ways to use light technology to cause unacceptable destruction, to beating the enforcement and security mechanisms around heavy technology, to making unsound proposals for relaxing restrictions on heavy technology. Red teams could be better-financed and organized than plausible criminal organizations and should have a much lower standard for success than “could have destroyed the whole world.”
This proposal has several advantages, relative to mass surveillance:
- It allows people to continue living unmonitored lives, doing all the things they were able to do in 2017. In particular, people are free to say what they want, organize politically, trade with each other, study whatever topics they want, and so on.
- It allows individuals to continue understanding and improving technology, without requiring a high degree of secrecy about key technologies or radically slowing technological progress.
- It provides a (relatively) clear and limited mandate for surveillance—with appropriate laws, it would require significant overreach for this surveillance to e.g. have a decisive political effect.
- It probably allows releasing lots of heavy technology to consumers without much extra burden, by inserting safeguards that are secure against attackers with only light technology.
Relative to no surveillance, the advantage of this proposal is that it stops some random person from killing everyone. Realistically I think “don’t do anything” is not an option.
This proposal does give states a de facto monopoly on heavy technology, and would eventually make armed resistance totally impossible. But it’s already the case that states have a massive advantage in armed conflict, and it seems almost inevitable that progress in AI will make this advantage larger (and enable states to do much more with it). Realistically I’m not convinced this proposal makes things much worse than the default.
This proposal definitely expands regulators’ nominal authority and seems prone to abuses. But amongst candidates for handling a future with cheap and destructive dual-use technology, I feel this is the best of many bad options with respect to the potential for abuse.
This proposal puts things in the “dangerous” category by default and considers them safe only after argument. We could take a different default stance; I don’t have a strong view on this. In reality I expect that most particular domains will be governed by more specific norms that overrule any global default.
The cost of this proposal grows continuously as the amount of heavy technology increases, starting from a relatively modest level where only a few kinds of technology need to be monitored. Even once heavy technology is ubiquitous, this proposal is probably not much more expensive than mass surveillance and might be cheaper. Any kind of surveillance is made much cheaper by sophisticated AI, and this proposal is no different.