Software Development Regulations
I'm currently finishing Robots and Empire by Isaac Asimov, the last book in his Robot series. Through the several centuries that the stories span, fundamental robotic laws dictate every robot's creation. Let's see how this compares to our current software and Artificial Intelligence endeavours.
Laws are important. They are the result of our evolving society. They are not perfect, but when a democracy creates them, they tend to serve the greater good of its population.
We currently have few or no laws concerning software development. We are not bound by any authority that dictates the boundaries of what we should or should not program. Compare this with a building: there are regulations for every aspect of its construction.
This lack of control causes significant problems in our software-built world. Our constructions are full of cracks and leaks. Imagine what could happen when hackers compromise a Deep Learning system whose model knows everything about you.
In the Robot series, when the Positronic Brain is invented, everything is built on three laws:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These laws are too vague to be implementable in software. In the books, the robots themselves come up with a Zeroth Law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Still, I think they convey something essential: we should have boundaries for the software we create.
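To make the contrast concrete, here is a minimal sketch in Python, with entirely hypothetical names, of the difference between a law like "do no harm", which no program can verify, and a narrow rule that a system can enforce and an auditor can observe:

```python
# A hypothetical illustration: "do no harm" cannot be checked mechanically,
# but a narrow rule such as "never transmit personal data without recorded
# consent" can be enforced and audited at a system boundary.

from dataclasses import dataclass


@dataclass
class OutboundRequest:
    destination: str
    contains_personal_data: bool
    user_consented: bool


class RegulationViolation(Exception):
    """Raised when a request breaks a concrete, auditable rule."""


def enforce_consent_rule(request: OutboundRequest) -> None:
    # Observable rule: personal data may only leave the system with consent.
    if request.contains_personal_data and not request.user_consented:
        raise RegulationViolation(
            f"Blocked transfer to {request.destination}: no recorded consent."
        )


if __name__ == "__main__":
    try:
        enforce_consent_rule(
            OutboundRequest(
                destination="analytics.example.com",
                contains_personal_data=True,
                user_consented=False,
            )
        )
    except RegulationViolation as err:
        print(err)
```

The point of the sketch is not the specific rule, but that a regulation only becomes enforceable once it is narrowed down to something a program can check and a log can record.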
I would use these laws as a baseline for regulations with implementable and observable rules. Is that possible? Would it be too restrictive? Who would enforce it? There are many open questions about regulating software, but we should answer them before it's too late.