Technology may reach a point where free use of one person’s share of humanity’s resources is enough to easily destroy the world. I think society needs to make significant changes to cope with that scenario.
Mass surveillance is a natural response, and sometimes people think of it as the only response. I find mass surveillance pretty unappealing, but I think we can capture almost all of the value by surveilling things rather than surveilling people. This approach avoids some of the worst problems of mass surveillance; while it still has unattractive features it’s my favorite option so far.
This post outlines a very theoretical and abstract version of this idea. Any practical implementation would be much messier. I haven’t thought about this topic in great depth and I expect my views will change substantially over time.
We’ll choose a set of artifacts to surveil and restrict. I’ll call these heavy technology and everything else light technology. Our goal is to restrict as few things as possible, but we want to make sure that someone can’t cause unacceptable destruction with only light technology. By default something is light technology if it can be easily acquired by an individual or small group in 2017, and heavy technology otherwise (though we may need to make some exceptions, e.g. certain biological materials or equipment).
Heavy technology is subject to two rules:
- You can’t use heavy technology in a way that is unacceptably destructive.
- You can’t use heavy technology to undermine the machinery that enforces these two rules.
To enforce these rules, all heavy technology is under surveillance, and is situated such that it cannot be unilaterally used by any individual or small group. That is, individuals can own heavy technology, but they cannot have unmonitored physical access to that technology.
For example, a modern factory would be under surveillance to ensure that its operation doesn’t violate these rules. As a special case of rule #2, the factory could not be used to produce heavy technology without ensuring that the new technology is appropriately registered and monitored. The enforcement rules would require that the factory be defended well enough that a small group (including the owner of the factory!) could not steal heavy machinery from it or use it illicitly. Because no small group would have unrestricted access to any heavy technology, a small amount of heavy technology might easily suffice to defend the factory.
The cost of this enforcement would be paid by people who make heavy technology. Because heavy technology can only be created by using other heavy technology (which is under surveillance) or by large groups (which have limited ability to coordinate illegal activities), it is feasible for law enforcement to be aware of all new heavy technology and ensure that it is monitored.
Sometimes this surveillance and enforcement can be provided by the technology itself. For example, computers could be built so that they can only perform approved computations (I realize this is objectively dystopian). An attacker using heavy technology would presumably be able to circumvent these restrictions, but would-be attackers will only have access to light technology and so will be at a large disadvantage. The technology may need to monitor its environment, phone home to law enforcement if things look weird, and potentially be prepared to disable itself (e.g. with thermite).
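The “only approved computations” idea could be sketched, very roughly, as an allowlist check: before running a program, the device verifies the program’s hash against a registry maintained by regulators. All names here (`APPROVED_HASHES`, `approve`, `can_run`) are hypothetical illustrations, not a real system; a real registry would at minimum be cryptographically signed and tamper-resistant.

```python
import hashlib

# Hypothetical registry of program hashes approved by regulators.
# In practice this would be a signed, regularly updated list that the
# device cannot modify.
APPROVED_HASHES = set()

def approve(program: bytes) -> str:
    """Regulator-side: add a program's hash to the allowlist."""
    digest = hashlib.sha256(program).hexdigest()
    APPROVED_HASHES.add(digest)
    return digest

def can_run(program: bytes) -> bool:
    """Device-side: only run programs whose hash is on the allowlist."""
    return hashlib.sha256(program).hexdigest() in APPROVED_HASHES

safe_program = b"print('hello')"
approve(safe_program)
assert can_run(safe_program)
assert not can_run(b"unapproved payload")
```

The asymmetry the post describes shows up here: defeating the check requires tampering with the device itself, which under this regime would mean gaining unmonitored physical access to heavy technology.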
In order to relax requirements on some type of heavy machinery, e.g. to release it to unmonitored consumers, the producer needs to convince regulators that it doesn’t constitute a threat to these rules. This could be due to inherent limitations of the technology (e.g. a genetically modified extra-juicy pineapple is not threatening) or because of restrictions that will be hard for an individual or small group to circumvent (e.g. the computer described above that only runs software approved by law enforcement, assuming its defenses look solid). If I want to release heavy technology, I have to pay for the costs of the evaluation process to determine whether it is safe.
These evaluations could be organized hierarchically. At the “root” are very detailed and cautious evaluations that are expensive and carried out rarely. A root evaluation wouldn’t just approve a single object, it would specify a cheaper process that could be used to approve certain kinds of items. For example, I could propose a simpler process for evaluating new materials, and perform an extensive evaluation to convince regulators that this simple process is reasonably secure. Then rather than having an extensive evaluation when I want to release a new plastic, I can follow this much cheaper new-material-approval-process. There could be several levels of delegated evaluations.
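The delegation structure above can be sketched as a simple tree: the expensive root evaluation approves cheaper sub-processes, which in turn approve individual items, and any approval can be traced back through its chain of delegations. The class and names below are illustrative only, not a proposed regulatory API.

```python
# Sketch of hierarchical approvals: a root evaluation approves not just
# individual items but cheaper sub-processes, which then approve items.

class ApprovalProcess:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # the process that authorized this one
        self.approved_items = set()

    def approve_item(self, item):
        """Cheap per-item approval under this process."""
        self.approved_items.add(item)

    def chain(self):
        """Trace the delegation chain back to the root evaluation."""
        node, path = self, []
        while node is not None:
            path.append(node.name)
            node = node.parent
        return path[::-1]

root = ApprovalProcess("root evaluation")
# Approved once, expensively, by the root evaluation:
materials = ApprovalProcess("new-material approval", parent=root)
# Each new plastic then only needs the cheap per-item check:
materials.approve_item("new plastic")

assert materials.chain() == ["root evaluation", "new-material approval"]
```

Several levels of delegation just correspond to longer chains, with each link paid for once by whoever proposed that cheaper process.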
Ideally we’d engage in red team exercises to probe enforcement mechanisms and ensure that they are adequate. These could apply to every step of the process, from finding creative ways to use light technology to cause unacceptable destruction, to beating the enforcement and security mechanisms around heavy technology, to making unsound proposals for relaxing restrictions on heavy technology. Red teams could be better-financed and organized than plausible criminal organizations and should have a much lower standard for success than “could have destroyed the whole world.”
This proposal has several advantages, relative to mass surveillance:
- It allows people to continue living unmonitored lives, doing all the things they were able to do in 2017. In particular, people are free to say what they want, organize politically, trade with each other, study whatever topics they want, and so on.
- It allows individuals to continue understanding and improving technology, without requiring a high degree of secrecy about key technologies or radically slowing technological progress.
- It provides a (relatively) clear and limited mandate for surveillance—with appropriate laws, it would require significant overreach for this surveillance to e.g. have a decisive political effect.
- It probably allows releasing lots of heavy technology to consumers without much extra burden, by inserting safeguards that are secure against attackers with only light technology.
Relative to no surveillance, the advantage of this proposal is that it stops some random person from killing everyone. Realistically I think “don’t do anything” is not an option.
This proposal does give states a de facto monopoly on heavy technology, and would eventually make armed resistance totally impossible. But it’s already the case that states have a massive advantage in armed conflict, and it seems almost inevitable that progress in AI will make this advantage larger (and enable states to do much more with it). Realistically I’m not convinced this proposal makes things much worse than the default.
This proposal definitely expands regulators’ nominal authority and seems prone to abuses. But amongst candidates for handling a future with cheap and destructive dual-use technology, I feel this is the best of many bad options with respect to the potential for abuse.
This proposal puts things in the “dangerous” category by default and considers them safe only after argument. We could take a different default stance; I don’t have a strong view on this. In reality I expect that most particular domains will be governed by more specific norms that overrule any global default.
The cost of this proposal grows continuously as the amount of heavy technology increases, starting from a relatively modest level where only a few kinds of technology need to be monitored. Even once heavy technology is ubiquitous, this proposal is probably not much more expensive than mass surveillance and might be cheaper. Any kind of surveillance is made much cheaper by sophisticated AI, and this proposal is no different.
7 thoughts on “Surveil things, not people”
It seems to me that the world has already tried this and failed, in the form of nuclear non-proliferation. Nuclear weapons, which require much more setup than this hypothetical heavy technology would, have been developed by North Korea despite other countries trying their best to stop it. The only way that this could feasibly work is by having a single world government (or a small number of closely aligned governments that cooperate very well, which is just about as impossible).
On top of that, I’m not sure that this idea of perfect surveillance of things is possible. Security professionals do their best to stop viruses, and yet computers still routinely get hacked. I’d be extremely doubtful that the white hats would be able to outsmart the black hats, if the black hats only need one win and the white hats need to keep on winning.
It seems wrong to say that other countries tried their best to stop North Korea from developing nuclear weapons.
I agree that perfect surveillance (or more importantly security) probably isn’t possible, and this is not intended as a replacement for arms control amongst states.
Note that in the current world there is only a modest resource asymmetry between attackers and defenders. Under the proposed regime, defenders would routinely use heavy technology, while attackers would be restricted to light technology (until they had already successfully mounted an attack). As the gap grew, attackers would be at a larger and larger disadvantage. For example, defenders’ computers would be orders of magnitude faster, and defenders would have access to much stronger AI.
(It’s not that some people would be labeled “attackers” and so have fewer resources, it’s that defenders would mostly be leveraging resources that are monitored such that they can’t be used to attack the system.)
I like this perspective. I think the main concerns are:
0) Coordination between nations (but this is also a problem for naive mass surveillance proposals).
1) Technologies that are super hard to control the proliferation of (e.g. ideas). An example would be a hypothetical idea that allows anyone who is aware of it to write a FOOMy AGI in a small program.
2) A related concern is that scientific progress might bring us to a point where lots of new potentially destructive ideas are easy to imagine and implement.
3) There is always the possibility that defence is just too hard and creative attacks will be found. Totalitarian mass surveillance doesn’t appear to have this problem.
I agree that we may eventually reach the point where ideas alone are sufficient to cause unacceptable destruction. Surveilling things rather than people/ideas would only be adequate for a while. (I think it might be “good enough” for a while though, say 5+ doublings of the economy, giving plenty of time for people to adopt stronger systems as new threats arise.)