As a researcher and lecturer in the System and Network Engineering programme at the University of Amsterdam, I have written a series of blog posts. In this series, commissioned by the Rathenau Instituut, I examine ethical questions in data research. In this contribution I take a closer look at data collection by smartphone apps. Not only the app itself, but also whether your phone runs Android or Apple’s iOS, affects how your data is handled. Also published on the Data denkers blog. This blog post is available under CC-BY.
The Internet is a complicated infrastructure, and it is taking over our lives. Users should have some understanding of how it works, so that they can better judge the regulatory or commercial impact of (new) measures. Most articles trying to explain the Internet make things more complex than they need to be. There are only two concepts you need to understand networking: layers and adaptations. Once you understand these concepts, you can fit them together to form a full networking stack. This explains how we do networking on the Internet, but also on other current and future networks.
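The layering idea can be sketched in a few lines of code: each layer wraps the data it receives from the layer above with its own header, and the receiver peels those headers off in reverse order. This is a toy illustration of encapsulation, not code from the post itself; the header strings are made up for the example.

```python
# Toy illustration of protocol layering: each layer wraps (encapsulates)
# the payload from the layer above with its own header.

def encapsulate(payload: bytes, headers: list[bytes]) -> bytes:
    """Wrap the payload with each layer's header, innermost first."""
    for header in headers:
        payload = header + payload
    return payload

def decapsulate(frame: bytes, headers: list[bytes]) -> bytes:
    """Strip the headers again, outermost first."""
    for header in reversed(headers):
        assert frame.startswith(header), "unexpected header"
        frame = frame[len(header):]
    return frame

# A miniature "stack": application data travels down through transport,
# network, and link layers, and back up again at the receiver.
layers = [b"TCP|", b"IP|", b"ETH|"]
frame = encapsulate(b"hello", layers)   # b"ETH|IP|TCP|hello"
assert decapsulate(frame, layers) == b"hello"
```

The second concept, adaptation, is what happens between two layers: each layer adapts the unit it gets from above to what the layer below can carry. The sketch only shows the wrapping itself.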
Last year saw outcries over privacy breaches through NSA spying. We see more privacy problems from pervasive monitoring by ad networks, from Google tracking you, and soon possibly in your own home. This week we have seen that morality on the Internet goes beyond privacy alone: there has been an outcry over the morality of Facebook manipulating news feeds for science. We need to look further than just privacy, and develop a new morality for the online space.
At the National Cyber Security Center One Conference last week, Chris van ‘t Hof interviewed me in his TekTok studio. We briefly talked about the Ethical Committee at the University of Amsterdam’s System and Network Engineering master, about Responsible Disclosure and why this is a bad term.
The original specification of DNSSEC is from 1997: RFC 2065. That means it is now over 17 years since its initial appearance. Sure, it has had a turbulent history and has undergone some big changes. Even the ‘final’ specification (RFC 4033) is over 9 years old. Yet I am going to argue that it has failed.
Cory Doctorow argues that security engineering should be public, like public health:
I think there’s a good case to be made for security as an exercise in public health. It sounds weird at first, but the parallels are fascinating and deep and instructive.
Last year, when I finished that talk in Seattle, a talk about all the ways that insecure computers put us all at risk, a woman in the audience put up her hand and said, “Well, you’ve scared the hell out of me. Now what do I do? How do I make my computers secure?”
And I had to answer: “You can’t. No one of us can. I was a systems administrator 15 years ago. That means that I’m barely qualified to plug in a WiFi router today. I can’t make my devices secure and neither can you. Not when our governments are buying up information about flaws in our computers and weaponising them as part of their crime-fighting and anti-terrorism strategies. Not when it is illegal to tell people if there are flaws in their computers, where such a disclosure might compromise someone’s anti-copying strategy.
I agree that security these days is harder than ever. The Internet has become a hostile environment and there are many actors actively trying to break anything connected to it.
Public health is a public service because it is in everybody’s general interest, and there is little individuals can do about it on their own. Making security a public service in the same way creates exactly the wrong kind of incentive. Companies release broken products and rely on consumers not knowing or caring about it. We have to create more awareness and public outrage, so that consumers actually care about this and can make an informed decision.
Informing the public about security-related issues, now that I can agree with as a public service.