The Dark Side of the Techno-Utopian Dream

What could go wrong with Vinod Khosla’s techno-optimistic vision of the future?


Speaking at TED in Vancouver last week (April 2024), venture investor Vinod Khosla gave a talk titled “Vision for 2035-2049”. He described a techno-utopian future powered by advances in AI, robotics, and other emerging technologies. While the timelines might be a bit off (who knows, really), some of the trends he outlined have already begun, and it’s easy to imagine them scaling quickly.

It’s easy to be uplifted by the promise of the rapid rise of AI and robotics: every person with access to the Internet will be able to reach the best doctor or tutor, an explosion of creativity driven by generative AI will bring the rise of personalised entertainment, labor will be nearly free as humanoid robots work 24/7 and take over many human tasks, and so on. It’s a vision that is easy to get behind. But unfortunately, we have to take into account that things can also go wrong, very wrong.


Below is a ‘warning list’ of 10 threats posed by scaling these advances in AI, robotics, and other tech trends.

1. We’re starting to see mass layoffs, with companies shedding 10% of their workforce at a time. When it happens in tech, the affected employees are likely to find new jobs in the industry, but what about less skilled workers? There are 3.5 million truck drivers in the US alone. What would the impact of fully autonomous vehicles be on them?

2. The US Air Force successfully conducted a dogfight between an autonomously controlled fighter jet and a human pilot last year. Autonomous weapons are no longer science fiction.

3. As we move rapidly towards ‘smart cities’, they could quickly be turned into surveillance states. Russia’s Project ‘Sfera’ is a recent example.

4. When Google released the first version of its ‘Gemini’ LLM, it became clear how the biases of a foundational model’s makers are reflected in its results; it was basically a reflection of ‘woke’ culture. Similarly, ask ChatGPT to tell a Jewish joke, and it will do so. Ask it to tell a Muslim joke, and the result is: “Humor is a wonderful way to bring people together, but it’s important to do so respectfully. I aim to be inclusive and considerate, avoiding jokes that may inadvertently perpetuate stereotypes or offend cultural sensitivities. How about I tell you a general, friendly joke instead?”

5. When the deepfake Tom Cruise videos came out, the power to create high-fidelity deepfakes was concentrated in a small number of models held by companies that largely adhered to safety principles. But video-generation tech is moving fast, and more models are now widely available, enabling voice and video cloning from a single portrait image. What happens when we can no longer believe what we see on social media (or the news) unless it is verified?

6. 170 million people in the US alone use TikTok, about 50% of the population, according to the New York Times. What makes something go viral on TikTok (as Bin Laden’s ‘letter to America’ did recently) is still a black box.

7. We are excited about the potential of AI agents, but they can behave in ways their creators don’t anticipate. In an interview with CBS’ 60 Minutes, Google tech exec James Manyika admitted that the company’s AI had somehow learned a language on which it had not been trained.

8. Last year there were many voices calling to ‘slow down’ the development of ‘God-like’ AI (basically, the pursuit of AGI). But those calls seem to have given way to a race between OpenAI, Google, and Amazon, among others, over who can reach AGI sooner.

9. The chip wars between the US and China are one example of AI inequality; the prevalence of English as the main language for accessing LLM technology today is another.

10. Eliezer Yudkowsky was named one of Time’s 100 most influential people in AI last year. Some call him a scaremonger, but he is one of the leading voices warning of an AI apocalypse. Stephen Hawking famously warned about the dangers of AI: “It will either be the best thing that’s ever happened to humanity, or it will be the worst thing. If we’re not careful, it very well may be the last thing.”

It’s hard to tell whether we will be able to avoid these risks as we continue to rush into developing and deploying AI across every field. On the bright side, there is at least awareness of these risks among governments, but will they manage to strike the right balance between innovation, progress, and regulation? Time will tell.

I’ll finish with another TED talk takeaway on the existential threat of AI, this time from Mustafa Suleyman, co-founder of DeepMind, founder of Inflection, and now CEO of Microsoft AI. Can we avoid the three conditions that make AI very dangerous in the next 5-10 years? I’m not sure.
