Welcome to This Week in Disruptive Tech, a weekly column and newsletter that focuses on the intersection between tech and society. If you like it, please do share it with your friends and colleagues. If you have any feedback or comments, please add to the Comments section below. If you haven't subscribed already, you can subscribe here. It will hit your inbox every Wednesday.
Take a look at the following images and screenshots.
1. These three logos were designed by Nikolay Ironov of the Russian firm Art. Lebedev Studio for its clients
2. The screenshot is of an article by Manuel Araoz, co-founder and ex-CTO at OpenZeppelin. It’s about experiments with the latest version of OpenAI’s GPT, a natural language generator.
3. An opinion column written by Oliver Taylor, who “has an M.A. in political science and works as a freelance writer in the UK with a focus on the Middle East.”
Now, let’s take a deeper look into each of these.
- Nikolay Ironov’s public portfolio has more designs. Apparently, the firm’s clients are happy with them. Only, Nikolay Ironov is not a real person. It’s an AI-driven computer programme that generates designs.
- Manuel Araoz says one of his goals “is to bring new experiences to people through technology.” Unlike Nikolay Ironov, Araoz is a real person. But the article on GPT-3 was not written by him. It was written by GPT-3. Even its earlier version, GPT-2, wowed those who tried it. GPT-3 generates text (and other output) using a model with 175 billion parameters, and the results are even more impressive. It will be available for commercial use later this year.
- No one knows who Oliver Taylor is. The smiling face you see next to his byline? Reuters has found it to be a deepfake, an image or a video altered or generated using AI. In one of his editorials, Oliver Taylor accused two human rights activists of having links to terrorists.
It might be unfair to put all these examples in a single bucket. But they all underscore one important point about AI. There are many debates around what counts as AI, and whether some companies are simply riding the wave, passing off basic correlation programmes—or even manual labour behind the screen—as AI. But all these examples show that irrespective of how we define it, AI is already a part of our social and economic lives. Clients pay for it. Businesses are thinking about use cases. And many are worried about its role in misinformation and propaganda—and what it might do to values such as democracy and freedom. It explains why many are conflicted about AI.
All these examples highlight the importance of getting our ethics right about new technologies. The good news is that there is interesting work going on in that area. Recently, Omidyar launched its ethics explorer, which offers some interesting ways to look at the problem. The accompanying sheet lists over 20 other frameworks for ethics in tech.
The big Twitter Hack and Google’s Confidential VM
Details are still emerging about how exactly hackers took over the Twitter accounts of business and political leaders, including Bill Gates and Joe Biden, but the attack involved extensive social engineering. It highlighted, yet again, that human beings are probably the weakest link in online security.
Meanwhile, Google launched its Confidential Computing service, called Confidential VM. Encryption has largely addressed concerns around keeping data safe while it’s stored on machines (at rest) or moving across networks (in transit). However, while data is being processed, it sits unprotected in memory and is open to security breaches. It’s one of the reasons why companies have been reluctant to adopt the cloud. Confidential Computing promises to address that. Here’s a good explainer of Confidential Computing.
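To see the gap Confidential Computing targets, here’s a minimal sketch in Python. It uses a toy XOR cipher (purely illustrative, not real cryptography) to show that even when data is encrypted at rest, it must be decrypted into ordinary memory to be processed—and that is the moment it’s exposed:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "encryption" for illustration only -- never use this for real security.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(16)

# At rest: what sits on disk is ciphertext; someone reading the storage sees noise.
stored_blob = xor_cipher(b"customer record: card ending 4242", key)
assert stored_blob != b"customer record: card ending 4242"

# In use: to actually process the record, we decrypt it into ordinary memory,
# where it is plaintext. Anyone who can read that memory (a compromised host,
# a malicious admin) sees the data. Confidential Computing closes this gap by
# keeping memory hardware-encrypted even during processing.
record = xor_cipher(stored_blob, key)
assert record == b"customer record: card ending 4242"
```

Confidential VM’s pitch is precisely that last step: the decrypted working copy stays inside hardware-encrypted memory, so even the cloud provider’s own infrastructure can’t read it.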
Solving for social engineering will be tougher.
The long road to an effective vaccine
In the recent past, two pieces of news about vaccines were met with celebration. Moderna announced that everyone who got its vaccine developed antibodies. Same with another vaccine candidate being developed by Oxford University and AstraZeneca. There are many other candidates in different stages of development.
However, there is a long road ahead. In an informative conversation moderated by Siddhartha Mukherjee, experts highlight some of the challenges. One of them is phase 3 of clinical trials, in which thousands of volunteers receive either the vaccine or a placebo. A challenge trial might be needed to see if the vaccine really works against infection in the real world—and that approach is rife with ethical problems. But even after all these hurdles are crossed, there is manufacturing and, later, logistics.
Here’s a perspective from Margaret (Peggy) Hamburg, who was commissioner of the US Food and Drug Administration from 2009 to 2015.
“Manufacturing has to be done in a high-quality and consistent way. There are materials that are needed that can be in limited supply, like the vials and the stoppers that you need for packaging. And then there are chains for distribution, and sometimes vaccines have to be kept frozen at very low temperatures. So you have to have all of those important systems for manufacturing, packaging and delivery and distribution up and running and the supply chains flowing in order to actually get what might now be an approved vaccine actually into the bodies of the individuals who need it.”