AI promises: the good, the bad, the ugly

By John Garner on Monday, May 8, 2023
Summary: A look at the current state and possibilities of AI and AGI, emphasizing the rapid progress, benefits, and potential risks of their development. AI tools are already driving productivity gains across industries; we look at applications ranging from farming to law. However, concerns about the security, accuracy, and ethical implications of these technologies persist, and some experts, like Dr. Geoffrey Hinton, are advocating for stricter regulation and greater caution in AI development.

I frequently stumble across articles and videos claiming we are seeing glimpses of AGI (Artificial General Intelligence) in today's Artificial Intelligence tools. That, along with descriptions of tools like ChatGPT as super-intelligent, is hyperbole. The icing on the cake is that journalists and others are being led to believe they are interacting with supreme beings. I suppose many in the know are tempted not to counter such myths when faced with a pot of gold: this frenzy of investors waiting to put their money into anything AI-related.


This fascinating NYTimes article about training GPT tools does a great job of explaining the reality of the predictive aspect of tools like ChatGPT and the stages of training required to go from gobbledygook to Jane Austen. Well, something that sounds like Jane, at least. The article explains how the statistical success at predicting the best next word depends on the quality and quantity of the source data. It's like a very complex and clever autocomplete tool.
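To make the "clever autocomplete" point concrete, here is a minimal sketch of next-word prediction. The vocabulary and probabilities below are toy numbers I made up for illustration; in a real LLM the probabilities come from a neural network scoring every token in a huge vocabulary, conditioned on the entire preceding text, not from a small lookup table.

```python
import random

# Toy next-word distributions (illustrative numbers only).
next_word_probs = {
    "it is a truth universally": {"acknowledged": 0.90, "known": 0.08, "believed": 0.02},
    "the cat sat on the": {"mat": 0.70, "sofa": 0.20, "keyboard": 0.10},
}

def predict_next(context: str) -> str:
    """Sample the next word in proportion to its estimated probability."""
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next("it is a truth universally"))  # usually "acknowledged"
```

Scale that idea up to billions of parameters and terabytes of text, and you get something that sounds like Jane Austen; the mechanism is still picking the statistically likely next word.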
You may have read how much it cost OpenAI to reach its current quality of output: the processing power, the electricity (think crypto levels) and the astronomical amounts of data. Without even getting into the ownership and copyright aspects, the numbers across the board are immense. However, just as the cost of genome sequencing declined over the years, these AI training costs are also dropping. We're even seeing models being trained on other pre-trained models to speed up the entire process.
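One common way to train a model on another pre-trained model is knowledge distillation: a small "student" model learns to imitate a large "teacher" model's output distribution instead of learning from raw data alone. Here is a hedged sketch of the core loss function; the logits and temperature values are made-up illustrative numbers, not from any real system.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw scores into probabilities; a higher temperature softens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened output distribution and
    the student's. Minimising this trains the student to imitate the teacher."""
    teacher = softmax(teacher_logits, temperature)
    student = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher, student))

# The teacher is confident the first token is right; training would nudge the
# student's scores until its distribution matches the teacher's.
print(distillation_loss(student_logits=[1.0, 0.8, 0.2],
                        teacher_logits=[4.0, 1.0, 0.5]))
```

Because the student learns from the teacher's already-digested probabilities rather than from scratch, training can be far cheaper, which is part of why costs are falling.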

Initial research points to AI helping companies in key areas. According to recent research from the Stanford Digital Economy Lab, AI tools are delivering real gains in productivity and act as a levelling-up function: more junior or less experienced employees become more productive, climb the learning curve faster and generate better customer sentiment, with an underlying trend of reduced employee turnover.

Accenture research has identified tasks in occupations across a multitude of industries, from banking to natural resources, that risk being affected by AI. Their study, "A new era of generative AI for everyone" (PDF), provides a wealth of information and ideas about the potential impact of AI, both in its current form and on our road towards AGI. As the diagram below from that study shows, Accenture foresees that even manually intensive industries like natural resources would see at least 20% of the workforce affected by automation, compared to at least 54% in banking.

Generative AI will transform work across industries (diagram: Accenture)

Is it a revolution or simply an evolution? When you see what the combination of robotics and AI achieves in the video below, you get a better sense of their impact on farming, for example: an industry where automated fruit picking once seemed out of reach.
"It's not a revolution, it's a transition. If you look at it now, the difference now between a modern farm and a farm 150 years ago is huge. Well, that didn't happen overnight. We're just at the start of really applying this kind of idea of intelligent robotics to real world tangible problems and it's a really exciting journey we're all on."

Still in farming, we can see here how AI can drastically reduce the amount of chemicals required by turbo-charging a robot's ability to identify bugs and other plant issues, allowing each issue to be addressed with extreme accuracy across multiple fields of crops. The result: a 95% reduction in the chemicals required.
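As a back-of-the-envelope illustration of where a figure like 95% can come from (the numbers below are hypothetical assumptions, not data from the video), targeted per-plant treatment only spends chemicals on the plants that actually have a problem, instead of blanket-spraying the whole field:

```python
# Hypothetical comparison of blanket spraying vs. AI-targeted spraying.
# All numbers are illustrative assumptions, not data from the source.
plants = 100_000            # plants in the field
infestation_rate = 0.04     # fraction of plants that actually need treatment
dose_ml = 2.0               # chemical dose per treated plant, in millilitres

blanket = plants * dose_ml                      # spray everything
targeted = plants * infestation_rate * dose_ml  # spray only flagged plants

reduction = 1 - targeted / blanket
print(f"blanket: {blanket/1000:.0f} L, targeted: {targeted/1000:.0f} L, "
      f"reduction: {reduction:.0%}")  # -> reduction: 96%
```

The better the vision model is at flagging only the plants that need treatment, the closer chemical use falls towards the true infestation rate.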

There are extremely positive and beneficial things coming from AI and its multiverse of services, with new ideas and actual products appearing every other day. We are experiencing an event likely to be as impactful as the industrial revolution. It has been building for years, but OpenAI, unlike Google, decided to burst it into the open and share it with the world. Yet what we are seeing at the moment, at least what is being described as semi-AGI, is not AGI, however impressive it seems. Companies are simply using neural networks that are more and more successful at predicting what people will accept as correct. That also means they frequently get things wrong: most of the chat-style systems, whether from OpenAI or Google, are overconfident in nearly every exchange. And they are only good at certain things; the successes are predominantly in narrow, even niche, fields. Was all the buzz and press worth the risk of letting the world experiment with it? It is now being used to create new tools for us all. And the massive investment in OpenAI and others just couldn't have been anticipated. Or could it?

If you are Facebook and fine with breaking things until they work, then this is simply a continuation of that approach. But for companies that need security, accuracy, reliability and, yes, predictability, none of the current systems can guarantee these qualities, even in narrow fields. Still, there are areas, as explained below with Harvey AI (you've got to love the humble tagline: "for elite law firms"), that are ripe for the picking: narrow topics with high predictability. It's a slam-dunk scenario in a lucrative industry like law, so it's pretty obvious money will pour in.

When the first articles about generative AI used the words predictive and predictable, I thought I understood how and where it would be used, and, let's say, the limits it would have. Several months later, I am seeing use cases I would not have expected for years.
Rules and regulations, like law and even the associated exams, are fairly predictable; so is everyday spoken and written language, as long as you aren't expecting Shakespeare's prose. The recent investments in Harvey AI by OpenAI and PwC (the latter after launching a legal chatbot in March) prove this beyond a doubt. The projects built here account for the confidential and proprietary data being used and the need for high accuracy. These investments stem from serious AI engineering: this is not just a chatbot built on ChatGPT, OpenAI's public beta that asks people not to enter confidential information. It's enterprise-ready.
When AI-generated images of people and landscapes appeared, I could see how the forms, faces and movement could be predictable. Even if the early models seemed to take the flexy fingers from 'Everything Everywhere All at Once' a bit too seriously, that no longer seems to be an issue; another example of how these systems get better with each new iteration released.

There has been a call for a moratorium on new releases of the most powerful engines behind these tools, such as GPT-4, the model behind ChatGPT, until proper research and risk assessment is completed. Some signatories seem less than honest, though, like Musk, who was in parallel buying tens of thousands of AI-optimised GPUs for Twitter.

We have also seen Dr Geoffrey Hinton, widely regarded as a godfather of modern AI, resign from Google and start a crusade to warn people about the major risks of AI if left unchecked. News networks have broadcast interviews in which he describes the potential dangers of AI without regulation and guardrails.

Dr. Hinton said he quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he could freely speak out about the risks of A.I. A part of him, he said, now regrets his life's work. NYTimes

Others, like Jürgen Schmidhuber, feel this is overblown: the rise of artificial intelligence is inevitable but should not be feared. It's a slightly controversial opinion and interesting as a different take on the future of AI, but it would be nice to see what evidence he believes supports his position.

There is a competition between governments, universities and companies all seeking to advance the technology, meaning there is now an AI arms race, whether humanity likes it or not. TheGuardian

Although tools like ChatGPT, Bard and Bing (and the others waiting in the wings to show the world their own AI solutions) are capable of so much, the New Yorker challenges our current enthusiasm about AI: how it should be used, and how likely it is to replace the likes of McKinsey, with their track record of turbo-charging Purdue's ability to sell more OxyContin when left unchecked. It's a look at how companies could use AI to improve the lives of their employees rather than just following the millionaire-CEO-club approach of prioritising shareholders above all. The Luddites redux, for AI?

It will always be possible to build A.I. that pursues shareholder value above all else, and most companies will prefer to use that A.I. instead of one constrained by your principles.
[...]
Some might say that it’s not the job of A.I. to oppose capitalism. That may be true, but it’s not the job of A.I. to strengthen capitalism, either. Yet that is what it currently does. If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I’d say it’s hard to argue that A.I. is a neutral technology, let alone a beneficial one. New Yorker

Article written by John Garner
