The imbalance of power in the AI game: in search of the common good

By John Garner on Friday, June 16, 2023
Summary: The article discusses the contested debate over how AI safety is, and should be, managed, its impact on technical debt, and its societal implications. It notes the Center for AI Safety's call for a worldwide focus on the risks of AI, and Meredith Whittaker's criticism that such warnings preserve the status quo, strengthening tech giants' dominance. The piece also highlights AI's potential to decrease societal and technical debt by making software production cheaper and simpler, resulting in far more innovation. It provides examples of cost-effective open source models that perform well and underlines the rapid pace of AI innovation. Lastly, the article emphasises the need for adaptive legislation to match the pace of AI innovation, empowering suitable government entities for oversight, defining appropriate scopes for legislation and regulation, addressing ethical issues and biases in AI, and promoting public engagement in AI regulatory decisions.

In the rapidly transforming landscape of artificial intelligence (AI), the debate over safety has intensified, fostering heated exchanges among tech executives, academics, and industry analysts. Meredith Whittaker challenges the dominant narrative, stressing the need to question and regulate the status quo. Others discuss the promising impact of AI on society's technical debt.
The burgeoning growth of open source AI models adds another layer of complexity to the conversation. Amidst this whirlwind, there is an obvious need for careful and timely decision-making that addresses immediate concerns, balances power, defines regulatory scope, and fosters public engagement, all while embracing this amazing wave of AI innovation.

The Center for AI Safety published a statement signed by hundreds of executives and academics. It was shocking by design and, as we'll see, likely served hidden agendas for many on that list. The statement itself is not so surprising, but deciphering the signatories' probable motivations is intriguing.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

– Center for AI Safety

However, other prominent figures in the industry, like Meredith Whittaker, are asking us to put this into context:

"Geoff’s warnings are much more convenient, because they project everything into the far future so they leave the status quo untouched. And if the status quo is untouched you’re going to see these companies and their systems further entrench their dominance such that it becomes impossible to regulate. This is not an inconvenient narrative at all. [...] I don’t think they’re good faith. These are the people who could actually pause it if they wanted to." The Guardian "These are the people who could actually pause AI if they wanted to"

I like to connect this topic to the positive impact AI may have on society's technical debt. In a recent article, SK Ventures describes a different possible impact of AI on that debt. Observers, including McKinsey (in their recent AI report) and many hedge funds, use today's software market as the reference point for their calculations of a massive increase in revenue for tech giants, large software companies, and their clients alike.

"Marketing and sales: Boosting personalization, content creation, and sales productivity"

– McKinsey: "The economic potential of generative AI: The next productivity frontier"


However, these analysts overlook the potential upcoming reduction of societal and technical debt: a scenario where AI drives down software production costs and complexity, triggering innovation and enabling the technology industry to create more value than it captures.

SK Ventures outlines two main points on how AI might have an even greater positive impact:

"1. Every wave of technological innovation has been unleashed by something costly becoming cheap enough to waste.
2. Software production has been too complex and expensive for too long, which has caused us to under-produce software for decades, resulting in immense, society-wide technical debt.
This technical debt is about to contract in a dramatic, economy-wide fashion as the cost and complexity of software production collapses, releasing a wave of innovation.”

Image: "The Next Collapsing Technical Cost is Software Itself", via SK Ventures

SK Ventures adds: "We do think, however, that software cannot reach its fullest potential without escaping the shackles of the software industry, with its high costs, and, yes, relatively low productivity."

A glance at the LMSYS.org Chatbot Arena Leaderboard shows an open source model in the top 5 of their week 5 leaderboard. A recent paper on Microsoft Research's Orca model shows it challenging the reference GPT-4 model, and we expect Orca to be open sourced shortly. These are examples of how the open source world can create models that score well on recognised performance tests.

Image: LMSYS.org Chatbot Arena Leaderboard, 25/05/2023

It is also amazing to see how quickly we've jumped, in a matter of weeks, from LLaMA-13B to Alpaca-13B and now to Vicuna-13B, which achieves results reported to be 92% as good as ChatGPT-4's.
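
Part of the reason these open models spread so quickly is how little code it takes to try one. Below is a minimal sketch, assuming the Hugging Face transformers and accelerate packages and the lmsys/vicuna-13b-v1.3 checkpoint (the model ID is an assumption; any compatible causal LM would do):

# Minimal sketch: chatting with an open source model locally.
# Assumes the Hugging Face "transformers" and "accelerate" packages are
# installed; the checkpoint name is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-13b-v1.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Vicuna checkpoints follow a simple USER/ASSISTANT prompt convention.
prompt = "USER: In one paragraph, why do open source LLMs matter?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))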

Open source LLM training costs are becoming extremely low compared to the massive cost of proprietary models, even though, on tests like the LMSYS leaderboard, the Vicuna model is beating the current version of PaLM 2 (the comparison below uses Google's estimated PaLM training costs).

LLM training costs: Open Source versus Proprietary
  - Estimated Vicuna 13B (params) model training cost: $100.00
  - Estimated Google PaLM 540B (params) training cost: $10,000,000.00
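
Taking those estimates at face value, a quick back-of-the-envelope check shows the scale of the gap (the dollar figures are the article's estimates, not verified numbers):

# Back-of-the-envelope ratio implied by the estimates above.
vicuna_cost = 100.00         # estimated Vicuna-13B training cost (USD)
palm_cost = 10_000_000.00    # estimated PaLM-540B training cost (USD)
print(f"Proprietary/open cost ratio: {palm_cost / vicuna_cost:,.0f}x")  # 100,000x

The Vicuna figure is so low largely because it fine-tunes an existing LLaMA base model rather than training from scratch, which is exactly the cost-collapse dynamic SK Ventures describes.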

New AI tools built on both open source and proprietary models appear every day, showing how much creative energy is going into innovation on both sides.

It's hard to keep up with what is going on in the world of AI at the moment. There is so much activity that AI influencers have shifted from announcing the newest tools once a week to publishing lists of AI-based tools several times a week, sometimes daily. There is that much going on!
And it's not just small apps that record you and the person you are talking to and take notes for you. It's the likes of Microsoft Research with Orca, an LLM clocking in at 13B params, while Meta has open sourced its music-making model, MusicGen. At the same time, you may have been taken aback by the doomsday claims from well-known AI figures who worked on components of what we are now witnessing and using.

It feels like the civilisation-ending warning is an attempt to avoid talking about real issues that, if dealt with, would benefit society. This does not mean halting the innovative streak we are seeing in AI, from both open source and proprietary LLMs and from all the companies creating amazing tools around them.
We do need to be conscious of the impact on society and, as Meredith Whittaker suggests, if prominent figures feel there are issues, now is the time to make decisions that will protect society and companies alike. The key issues are not civilisation-ending: they are occurring now or will occur shortly, could create real havoc in societies around the world, and may well result in chaos if we do nothing.

A. Speed of change & future proofing legislation
Legislation needs to take account of the velocity at which innovation and change are taking place. Governments and companies alike need to act now to provide proper regulation and guidance for the industry to follow, and to protect society from bad actors and other issues. This point is crucial, as legislation often lags behind technological advancement. Legislation needs to be flexible and adaptable to the rapid evolution of AI technologies, as per point C. Policymakers must stay up to date and take the initiative to learn about potential AI applications and their effects on society. This should include fostering a culture of cooperation and knowledge exchange with tech and other relevant industry experts and leaders, AI researchers, and ethicists. We should not forget to include the public either. We could even use AI to help manage this.
B. Legislative Power: Giving Power and Support to the right Government entities
Legislation, as per point C, is only as good as the power and scope of intervention given to the organisations and government entities asked to police the industry, and only as good as those bodies' expertise to understand and make informed decisions about the technology. As mentioned in point A, it would be worthwhile to establish independent, multi-stakeholder bodies. Involvement from academia, ethicists, civil society, the tech industry and, perhaps, philosophers can bring balanced perspectives and specialised knowledge to such institutions.
C. Defining the scope of Legislation and Regulation of A & B
It's important to have the power to apply rules and regulations, to issue fines, and to adapt quickly to changing needs. But it is equally important to define legislation and regulation with the proper scope: covering today's uses, cases where AI is used in unexpected ways, and modern takes on well-known issues and illegal activities. It is crucial to protect individuals' rights in the context of AI: what steps can be taken to protect privacy, freedom of expression, and non-discrimination in a world that increasingly relies on AI?
It is important to remember that the scope of regulation should be able to accommodate the global nature of AI technology, taking into consideration international standards and agreements.
D. Ethical & Bias Considerations
While it is implicit in the previous points, it is useful to explicitly address the ethical implications of AI. Legislation should guide the ethical use of AI, addressing issues like fairness, transparency, accountability, and the prevention of harm. Preconceived notions can be baked into AI systems, resulting in partial or unjust outcomes when those systems are used in real-world decision-making. Biases in AI have been shown to affect decisions related to recruitment, lending, and law enforcement. Legislation should therefore mandate regular audits of AI systems to detect and mitigate such biases, as the sketch below illustrates.
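
To make the audit idea concrete, here is a minimal sketch of one common fairness metric, the demographic parity difference, assuming binary decisions and a binary protected attribute; a real audit would use several metrics and a dedicated toolkit such as fairlearn:

# Minimal sketch of a bias audit metric: demographic parity difference.
# The data below is purely illustrative; real audits use production decision logs.
def demographic_parity_difference(preds, groups):
    """Gap in positive-outcome rates between the two groups (0 and 1)."""
    rate = lambda g: sum(p for p, grp in zip(preds, groups) if grp == g) / groups.count(g)
    return abs(rate(0) - rate(1))

preds = [1, 0, 1, 1, 0, 1, 0, 0]   # model decisions (e.g. loan approved)
groups = [0, 0, 0, 0, 1, 1, 1, 1]  # protected attribute value per applicant
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5

A persistent gap like this on real decision logs would be the trigger for the mitigation and reporting steps that regulation could mandate.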
E. Public Engagement
The public should have a say in how AI is regulated, given these technologies' wide-reaching societal implications. AI has the potential to affect everyone, so it is essential that all segments of society can voice their concerns and perspectives. Just as I advise companies to build trust and understanding of AI through transparency and by educating and supporting employees, the same should happen with the public. Consultation also needs to reach underrepresented groups. Having been educated in both England and France, I would advocate for public consultations and debates on AI, as well as educational initiatives. Public opinion and sentiment can be a great source of direction for policymakers, helping them determine which aspects of AI most concern people and therefore where regulation should be focused.


Further Reading

These are the seven key points discussed in the article, shortened and summarised here. Check out the full article for more details:

  1. Confidentiality and security of data
    - Companies must establish strict data security measures to protect sensitive data from unauthorized access, disclosure, or misuse.
  2. Invest in eliminating bias
    - Companies must ensure that their AI systems are not biased by using unbiased data, auditing algorithms, and incorporating a diverse team of engineers and developers.
  3. Sufficient training, information & support
    - Companies must provide appropriate training and support to their employees to ensure that they can use AI systems effectively and ethically.
  4. Prioritizing best use cases for AI
    - Companies must identify the best use cases for AI and integrate AI systems into their workflows in a way that is efficient and effective.
  5. AI systems: only as good as the data they are trained on
    - Companies must use high-quality data that is accurate, reliable, relevant to the task at hand and to their industry, and must audit it frequently.
  6. Replace workforce carefully & intelligently
    - Companies must be careful not to replace too much of their workforce with AI, as this can lead to job displacement and other negative consequences.
  7. Ensure accuracy of AI output
    - AI systems must be accurate and transparent in their decision-making, as the impact of inaccurate or inappropriate decisions could be significant. Avoid hallucinations.
Article written by John Garner
