Note: Welcome to This Week in Disruptive Tech, a weekly column and newsletter that focuses on the intersection between tech and society. If you like it, please do share it with your friends and colleagues. If you have any feedback or comments, please add to the Comments section below. If you haven't subscribed already, you can subscribe here. It will hit your inbox every Wednesday sharp at 7 AM.
GoSocial and the Art of Decentralisation
GoSocial, a social media startup, was in the news recently after Anand Mahindra invested $1 million in the company. He tweeted “Took 2 yrs, but I finally found the startup I was looking for.”
GoSocial is trying to solve an interesting problem. We spend a lot of time online, reading tweets and Facebook posts, looking at photos on Instagram, watching videos on TikTok. However, only a small percentage of all that content is paid for.
As Rajat Dangi, one of its five co-founders, points out in a Medium post, “content creators are among those who create the highest value on the internet. And the problem is, it is really hard for them to capture a fair fraction from that monetary value.”
GoSocial allows expert creators to host challenges or contests, and users take them up.
- A recent contest asks users to share photos on the unlock theme, “images that show the subtle yet big changes in your surrounding”.
- Another challenges users to create tile art taking inspiration from the work of Sumedha Pokhriyal.
“We are gradually building tools for experts to monetize their expertise and content. There are several ideas and concepts at play here, vertical communities, P2P exchange of value, and a sense of personal growth,” Dangi wrote.
One might be tempted to see GoSocial as just a social media company for artists, photographers and creators. But, in fact, it’s part of a bigger trend towards decentralisation in media. If its model works for photography and tile art, there is no reason why it shouldn't work for journalism.
Want to source a photo of a key event from a person closest to the ground, get the facts of an important story re-checked by a third party, or have an expert write an explainer on a complex topic? Until now, we have depended on media organisations to do that job. In the future, it could become much more distributed.
Can law fix what facial recognition technology breaks?
Earlier this month, IBM announced that it was getting out of the facial recognition technology business. Soon after, Amazon and Microsoft announced a one-year moratorium on sales of the tech to law enforcement agencies, to give policymakers time to come up with laws and regulations.
There has been a long-standing conflict between activists and businesses around this technology. Activists have pointed to its biases (it’s not good at recognizing non-white, non-male faces), privacy concerns and potential misuse, especially by law enforcement agencies. Businesses mostly responded by pushing even harder, partly because there was money to be made, and also because they believed the technology would get better and concerns would be allayed.
Will these conflicts go away if Amazon and Microsoft wait for laws to be passed? Some believe that law cannot fix what’s broken by technology. That is true in some cases: what a technology promises to do, it must do. However, we must remember that technology comes to life in society, and the problems that emerge out of those interactions cannot be solved by that technology alone. They must be solved through other devices—other pieces of technology, law, and institutions. The biggest question of our times is whether these checks and balances can keep pace with the scale and far-reaching impact of new technology.
Deepfakes and small aims
Facebook recently announced the winner of its Deepfake Detection Challenge. Deepfakes use machine learning to manipulate images and videos. The winning model achieved an accuracy of 65%. Better than tossing a coin, but hardly impressive.
Or, maybe, it is impressive. As we saw in an earlier edition, techniques such as generative adversarial networks (GANs) make fakes hard for detection models to spot. A GAN pits two neural networks against each other: both see the training data, one generates a fake, the other tries to spot it, and the pair are trained in a loop until the fakes become nearly impossible to identify.
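To make that adversarial loop concrete, here is a deliberately minimal, illustrative sketch in plain Python/NumPy. Everything in it is an assumption for illustration (a one-dimensional "dataset", a linear generator, a logistic discriminator); it has nothing to do with Facebook's challenge or any real deepfake system, but the alternating update structure is the same idea.

```python
import numpy as np

# Toy 1-D GAN sketch (illustrative only). "Real" data is drawn from a
# normal distribution; the generator g(z) = a*z + b maps noise to samples;
# the discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
# The two are updated in alternation: D learns to separate real from fake,
# G learns to produce samples that D scores as real.

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.01         # learning rate

for step in range(2000):
    # --- discriminator step: push D(real) up and D(fake) down ---
    x_real = rng.normal(4.0, 1.25)         # a sample of "real" data
    x_fake = a * rng.normal() + b          # a generated sample
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # --- generator step: push D(fake) up, i.e. try to fool D ---
    z = rng.normal()
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

print(f"generator now samples around mean {b:.2f} (real data mean is 4.0)")
```

In a real GAN both networks are deep and trained by backpropagation, but the core dynamic is this tug-of-war: each improvement in the detector hands the generator a better training signal, which is one reason detection accuracies like 65% may be harder-won than they look.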