
Tuesday, December 5, 2023

Custodians of AI: Ferrari Turbocharger or Brembo Brakes?

Right before Thanksgiving, the AI world seemed to unravel with the bizarre events at OpenAI: the unexpected removal of the CEO and then his abrupt return. From my perspective, this drama sparked a monumental debate in the online AI community on platforms like LinkedIn, Twitter, and Reddit, framed as a colossal clash of titans over the progression toward artificial general intelligence. I want to provide a quick overview of my understanding of the intriguing subculture that has emerged around this controversy.

The Rise of the Accelerationists

The term "effective accelerationism", abbreviated online as e/acc, seems to have gained a lot of prominence in the last 6 months with the discussion around LLMs and artificial general intelligence. Initially, I correlated it with "effective altruism" – a concept centered on maximizing societal good1. However, effective accelerationism diverges from altruism in very specific way by proposing rapid technological advancements as the vehicle for societal betterment [1,2]2. This approach entails moving at swift, often unchecked, velocities towards technological advancement. In my view, this is the embodying essence of engineering under the guise of a grander philosophical vision for a Utopian society. Essentially, e/acc is nothing more than engineering infused with some deeper philosophical aspirations. At least this is my take on it.

The Decelerationist Stance

Contrasting the accelerationists are the decelerationists, or decels³, who seem to be primarily AI researchers concerned with AI system safety [3]. Their apprehension seems two-fold, though it is unclear to me which concern is the greater: first, the fear of autonomous AI systems gaining unpredictable agency, and second, the potential misuse of AI by humans, such as using AI-generated text or images to spread damaging misinformation. The decelerationists advocate for a thorough, safety-first approach above all. I'm pretty sure this is the mindset of most Ph.D. scientists – ever-questioning, often sidetracked by hypotheticals, and meticulously slow in deriving scientific conclusions. This has great value though! Without this mindset you wouldn't have the computer you're reading this on.

However, casting this as a binary conflict is far too simplistic. The quest for technological advancement isn't a zero-sum game: if the technologists prevail, we don't get, with any certainty, an AGI or super-intelligence that eliminates us. If AI safety experts lobby for regulation that slows the pace of innovation, humanity doesn't become an island of despair. Effective accelerationists are just engineers and technologists, visionaries eager to propel humanity into a future reminiscent of the sci-fi novels of their childhood. They probably pursue this with a sense of responsibility, albeit tinged with excitement, rather than as mad scientists bent on inventing life. Conversely, decelerationists, while cautious, are equally committed to leveraging technology for a brighter future. Their apprehensions often stem from disappointment in observing how platforms like TikTok can manipulate human behavior and interactions through algorithmic social engineering.

My "Genius" Thoughts, 🤪

What's my takeaway from the e/acc and decel subculture? It's more than a simplistic 'us versus them' narrative. The online chatter that labels it as such is reductive and unproductive. It's a discourse rooted in good intentions, where decelerationists aren't merely obstructionists and accelerationists aren't recklessly charging forward without consideration. At its core, advancing technology is just about responsible engineering – a balanced approach to innovation and caution.

Footnotes


  1. I have a pretty strong bias against the effective altruism movement. It might be a "good" guiding principle, but in practice, it will never work.

  2. Although I used this reference, I strongly disagree with the practice of "doxxing" simply because the article's author/editors viewed it as a "public good". It's interesting because I took a bit of personal offense, since I'm very familiar with the quantum computing work by the person who was doxxed; he has some really nice quantum machine learning papers. But of course that's my problem.

  3. I'm curious if this term is a slight at this group of people; it seems uncannily close to "incels".

References

[1] E. Baker-White, Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement?, Forbes. Article URL (accessed December 5, 2023).

[2] Effective accelerationism, Wikipedia. (2023). Wikipedia URL (accessed December 5, 2023).

[3] Pause Giant AI Experiments: An Open Letter, Future of Life Institute. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ (accessed December 5, 2023).


