In Waking Up: A Guide to Spirituality Without Religion, Sam Harris makes a statement that might, on first hearing, sound like mumbo jumbo, but on reflection states a plain fact.
“Our minds are all we have. They are all we have ever had. And they are all we can offer others. This might not be obvious, especially when there are aspects of your life that seem in need of improvement—when your goals are unrealised, or you are struggling to find a career, or you have relationships that need repairing. But it’s the truth.”
If minds are all we have, the most disruptive technologies should be all about the mind. In some ways they are. Some of the biggest worries about the internet, social media, mobile, analytics, wearables and a range of other technologies are in fact about how they are changing our minds, mostly indirectly and often unintentionally. But now we are entering a phase where different fields—chemistry, biology, mathematics, computer science—collude to target our minds directly.
- Drugs: For years, the scientific establishment paid little attention to the therapeutic use of hallucinogens, partly because of social stigma, and partly because the risks of abuse were real. Now there is growing interest: these drugs can effect long-lasting change, and potentially enhance human performance. Nautilus surveys interesting developments in the field.
- Virtual Reality: As human beings trying to make sense of the world around us, we give too much importance to people's intent, motives and character, and too little to the situation, to the context. In some cases, not knowing the context might make us less empathetic. But empathy is key not only to solving people-related issues, but also to work in areas such as product design and development. Now a company called Mursion promises to make us more empathetic by simulating both the characteristics of a person and the situation. Here is Fast Company's profile of the company.
- Brain-Computer Interface: A couple of years ago, Facebook's Mark Zuckerberg said the company was working on a system that would help you type five times faster. Like Elon Musk, he sees human beings facing a bandwidth problem: a super-capable brain constrained by the body. A brain-computer interface could solve that problem. Zuckerberg expounded on some of these fascinating and frightening ideas with Harvard Law School professor Jonathan Zittrain.
Our understanding of privacy is changing fast. The debate around the etiquette of replying versus quote-tweeting on Twitter highlights the issue. The former apparently means engaging with another Twitter user; the latter, "broadcasting" your views to your audience. That a reply, which is just as public, is not considered broadcasting is an indicator of how big the shift is.
For many years, we have worried about whether Facebook wants to take our privacy seriously. The assumption was that it could if it wanted to, so it was a question of persuading it through social, political and legal pressure. Now, especially after it emerged that Facebook had stored over 600 million passwords in plain text, we should worry about whether the company can take our privacy seriously. [Change your Facebook password, if you haven't already.]
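To see why plain-text storage is such a lapse: the standard practice is to store only a salted, slow-to-compute hash of each password, so that even a leaked database does not reveal the passwords themselves. A minimal sketch of that practice, using Python's standard library (the function names are mine, purely illustrative):

```python
import hashlib
import hmac
import os

def store_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash. Only the salt and hash are stored; the password never is."""
    salt = os.urandom(16)  # random per-user salt defeats precomputed lookup tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from the attempted password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("hunter2")
print(verify_password("hunter2", salt, digest))  # True
print(verify_password("wrong", salt, digest))    # False
```

The 100,000 PBKDF2 iterations are there to make brute-forcing a stolen hash expensive; logging passwords in plain text bypasses every one of these protections at once.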
Our user agreement with Facebook might read differently, but the deal that many of us made with Facebook is, "take my data, and give me your digital goodies". The problem with this deal—which has been very good for Facebook—is that we have no idea what data Facebook has been taking, and no idea what those digital goodies are worth. What if both sides of the exchange were defined well enough? Would it make any difference? In Factor Daily, Nilesh Christopher tells us what has been happening with Account Aggregators, or consent brokers. In this model, users have a clear idea of what they give and what they get. On the ground, though, it gets more complex than that.
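To make "a clear idea of what they give and what they get" concrete, a consent in such a model is a structured, inspectable record rather than a buried clause in a user agreement. A hypothetical sketch of what such a record could capture—the field names are my assumptions for illustration, not the actual Account Aggregator schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentRecord:
    # Hypothetical fields, not the real consent-broker specification.
    user_id: str
    data_requested: list[str]  # exactly which items leave the user's account
    purpose: str               # what the recipient may use them for
    recipient: str             # who receives the data
    valid_until: date          # consent expires; no open-ended access

consent = ConsentRecord(
    user_id="user-123",
    data_requested=["bank_statement_2018"],
    purpose="loan eligibility check",
    recipient="ExampleLender",
    valid_until=date(2019, 6, 30),
)
print(consent.data_requested, "->", consent.recipient)
```

The contrast with the Facebook deal is that every field is explicit and bounded; the on-the-ground complexity Christopher describes comes from everything the record cannot capture, such as what happens to the data after it is shared.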
If, at some point in the future, technology gets better at making decisions about fairly complex issues, will we get technology to make them for us? If artificial intelligence can do the work of a jury better than humans—process all the information presented during the trial, stay free of bias, analyse the information to arrive at the correct decision—will we outsource the judicial process to technology? What about laws themselves? Will we input the values we believe in to a system and let it draft laws that are fair and just? Paula Boddington, author of Towards a Code of Ethics for Artificial Intelligence, explores questions like these in her essay Moral Technology.
One of the many problems with present-day policy design and law-making—with democracy itself—is that it is too concerned with resolving conflicts among those who are living today. We don't care much about future generations. Roman Krznaric argues that democracy should be reinvented for the future. He points to two interesting initiatives:
- Japan-based Future Design, led by economist Tatsuyoshi Saijo, organises assemblies where some citizens are asked to represent the future generation.
- US-based Our Children’s Trust pushes the idea of a clean environment as a legal right for future generations.
- The latest Google product to be sent to the cemetery: Inbox.
- A new prototype ultra-thin graphene-based light-absorbent film can heat up to 160 degrees Celsius in seconds.
- SoftBank invests $413 million in Delhivery, valuing the startup at $2 billion, giving it an entry into India’s unicorn club.
- How YouTube conquered India, where 95% of video consumption is in local languages.
- Stanford launched The Institute for Human-Centered Artificial Intelligence. But, it has a problem.
“Stanford just launched their Institute for Human-Centered Artificial Intelligence (@StanfordHAI) with great fanfare. The mission: ‘The creators and designers of AI must be broadly representative of humanity.’ 121 faculty members listed. Not a single faculty member is Black.” — Chad Loder (@chadloder), March 21, 2019