Facebook is currently disputing the idea that it listens in on real-world conversations by overreaching its permission to access the microphone. Yes, users want apps to access the microphone when actively recording video or audio, but not when they're having a private conversation with someone in the 'real world', with the phone off.
The biggest issue I have with this is the denial. They explicitly denied listening via the core Facebook app and via their Messenger app. However, we're all using a variety of additional apps owned and managed by Facebook:
- WhatsApp (needs the microphone for audio/video calls)
- Instagram (needs the microphone for video)
It's not practical to grant or revoke access on a per-use basis. What's required here is for iOS/Android to provide better UX around which apps are accessing which features, each time they're accessed. In the same way we have the battery monitor, the OS providers should be providing an audit log of exactly what your phone has been doing.
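To make the idea concrete, here is a minimal sketch of what such a per-app sensor audit log might look like. This is purely illustrative: `AccessEvent`, `microphone_usage` and the package names are all hypothetical, not any real iOS/Android API.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessEvent:
    """One hypothetical record of an app touching a sensor."""
    app: str            # package/bundle identifier
    feature: str        # "microphone", "camera", "location", ...
    timestamp: datetime # when access began
    duration_secs: float

def microphone_usage(log, since):
    """Summarise how long each app held the microphone since a given time."""
    totals = {}
    for event in log:
        if event.feature == "microphone" and event.timestamp >= since:
            totals[event.app] = totals.get(event.app, 0.0) + event.duration_secs
    return totals

# Example: a user (or the OS settings screen) could surface this summary
# alongside the battery monitor.
log = [
    AccessEvent("com.whatsapp", "microphone", datetime(2016, 11, 1, 9, 0), 120.0),
    AccessEvent("com.instagram", "camera", datetime(2016, 11, 1, 10, 0), 30.0),
    AccessEvent("com.whatsapp", "microphone", datetime(2016, 11, 1, 11, 0), 60.0),
]
summary = microphone_usage(log, since=datetime(2016, 11, 1))
```

The point isn't the implementation; it's that the raw data already exists on the device, and exposing it by default, rather than behind rooting or developer tooling, is an OS design choice.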
I'm not sure what privacy campaigners are calling it yet; I need to have a read up and familiarise myself. I'd call it overreach. We should be protected from such overreach, and the OS creators need to provide better tools by default, rather than requiring rooting and/or technological expertise to understand what a device that you're paying for, and that is with you practically 24 hours a day, is doing with your data.
One of the key paradoxes about freedom is the freedom to do both right and wrong. We balance freedom with laws that restrict certain freedoms by providing limits on what society regards as acceptable. In a completely open model, those freedoms can be used for both good and evil. Despite being of a liberal mindset, I also understand that we must take responsibility for our own actions, which may mean balancing our own freedoms against our responsibilities to our fellow man.
With the announcement today that Facebook, Google & Microsoft are teaming up to block extremist content on the internet, should we be concerned that such technology could also be used to restrict access to any content? If Donald Trump doesn’t like a particular news article about him, can he get it blocked?
One of the most concerning episodes from the recent US election is that conversation is being channelled into separate media. Rather than having a balanced debate on one platform, echo chambers on both sides of the argument are being set up to polarise opinion. Twitter is blocking the 'alt-right' and fascist behaviours, only for that community to move over to a new platform, gab.ai. The liberal left are unlikely to want to join that platform, so the problem is exacerbated.
Does 'freedom of speech' on the internet require a new paradigm? Media censorship has been an accepted means of protecting groups and individuals against vile, fascist and immoral diatribes from groups that make the majority of us uncomfortable. Does the communication revolution of the internet mean that we can no longer use these old methods to protect our citizenry?
I'm unclear as to whether there's an answer, or whether the answer is unacceptable to our traditional values. We cannot create a 'clean' internet without gagging certain views, and in doing so we're giving the control to whom? The technology companies are immature as far as moral and ethical codes go, so maybe we need to lean on other organisations with more experience of ethical and moral frameworks to help shape a better model of intervention.