Sustainable Enterprise AI Adoption: Protecting Confidentiality, Ensuring Accuracy, and Successful Business Integration

By John Garner on Tuesday, May 23, 2023
Summary: The public's recent access to breakthroughs in AI has sparked excitement, but integrating these tools into businesses often leads to significant issues, especially without proper management. Implementing AI effectively requires robust security measures to protect sensitive data, investment in unbiased technology, sufficient training to understand AI systems, identification of the best AI use cases, assurance of reliable data sources, and careful management to prevent over-reliance on AI at the expense of the human workforce. It is also critical to understand that AI systems like ChatGPT have limitations and inaccuracies, and that they need continuous monitoring and fine-tuning, while keeping in mind that these technologies have evolved from a long history of advancements by various companies and organisations.

The recent breakthroughs in AI made available to the public have been a source of excitement for businesses and organisations. However, their implementation within companies is often lacklustre and prone to creating major issues when not properly managed. If you need to add a message saying "please do not upload client or confidential information to the system we are providing you with", it's a fail.

It is worth looking at the following points when you are thinking about implementing AI in your organisation:

  1. Developing robust security policies to protect sensitive and confidential data from unauthorised users
  2. Investing in technology and processes to detect, reduce, and raise awareness of bias in AI outcomes
  3. Providing sufficient training, support, and information to understand both the advantages and complexities of AI systems
  4. Determining the best use cases for AI and integrating it into existing workflows
  5. Ensuring data sources are reliable, accurate, up-to-date, and relevant for AI models
  6. Resisting the push to replace large portions of the workforce with AI
  7. Accounting for the levels of inaccuracy of current generative AIs

1. Confidentiality and security of data
It's critical that data is not only stored securely but also transmitted securely between a company and its employees and users. The same applies to trusted partners and third parties, who must be able to guarantee the confidentiality and security of their systems and all data. You may have heard of the infamous case of Samsung data leaking through an employee's use of ChatGPT. Promoting beta versions of such systems for internal use, unless you are a small company, is fairly sloppy. While some companies like Apple are banning their employees from using ChatGPT, others like Goldman Sachs are looking to create their own bespoke versions internally. The latter is the way to ensure the confidentiality and security of both your clients' and your own data.
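To make this concrete, here is a minimal sketch of encrypting sensitive records before they are stored or shared, using the Python `cryptography` package. The record content is made up, and a real deployment would load keys from a secrets manager rather than generating them inline:

```python
# Minimal sketch: encrypting a sensitive record before storage or transmission.
# Assumes the `cryptography` package is installed (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, load this from a secrets manager
cipher = Fernet(key)

record = b"client: ACME Corp, contract value: 1.2M"  # made-up example data
token = cipher.encrypt(record)  # safe to store or transmit
print(cipher.decrypt(token))    # only holders of the key can read it back
```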
It's also essential for companies to establish strict access control policies for their AI systems. This means defining who has access to which data, under what conditions, and for what purpose. Companies must also put in place stringent authentication procedures to guarantee that only permitted individuals can access the data. Regular security training for employees can also help them understand the importance of these measures and how to follow them correctly. A solid data recovery plan should be established beforehand to prevent data loss from any unforeseen circumstances. This is a fairly traditional multi-layered security approach. It should not only protect sensitive data but also help maintain the integrity and credibility of the AI system, thus ensuring its effective deployment in an enterprise setting.
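As a rough illustration of what an access control check in front of an internal AI endpoint might look like, here is a sketch in Python. The roles, permissions, and function names are hypothetical; the point is simply that every request is authorised before any data reaches the model:

```python
# Hypothetical role-based access check guarding an internal AI endpoint.
from dataclasses import dataclass

# Illustrative role-to-permission mapping; real policies live in an IAM system.
PERMISSIONS = {
    "analyst": {"read:public", "query:model"},
    "manager": {"read:public", "read:confidential", "query:model"},
}

@dataclass
class User:
    name: str
    role: str

def authorise(user: User, permission: str) -> bool:
    """Return True only if the user's role grants the requested permission."""
    return permission in PERMISSIONS.get(user.role, set())

def query_model(user: User, prompt: str, uses_confidential: bool) -> str:
    if uses_confidential and not authorise(user, "read:confidential"):
        raise PermissionError(f"{user.name} may not query confidential data")
    return "model response"  # placeholder for the actual model call

print(query_model(User("dana", "manager"), "Summarise contract X", True))
```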

2. Absence of bias in AI output
A key factor for AI systems is the quality of their data sources. Companies also need to be sure that the data being used is unbiased. They should be able to allow an expert to review and understand why certain prompts result in biased output; there should be a mechanism to understand the decision-making process of the AI. There also need to be rules or guardrails in place to prevent bias from affecting the output. The quantity of training material will affect the general quality of an LLM's output, which can naturally affect bias. You may remember when Microsoft released a chatbot named Tay on Twitter without proper training and guardrails: it was a fiasco.
Furthermore, it makes sense to ensure there are ongoing audits and revisions of AI algorithms to identify and rectify biases over time. The mindset should be that AI is not a 'set it and forget it' solution but a continually evolving tool; this should help reduce bias as the different aspects of AI tools evolve. It's also beneficial to incorporate a diverse team of engineers and developers who bring varying perspectives, which can help identify and mitigate potential biases. This kind of approach is essential to ensure AI systems appropriately mirror and depict the world as it shifts.
It is essential that employees are aware of their ethical responsibility and that companies help them understand the implications and responsibilities associated with AI usage.
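To illustrate the kind of recurring audit described above, here is a deliberately simple sketch that compares outcome rates across two groups for a hypothetical screening model. Real bias audits are far more involved; the group names, numbers, and threshold below are all made up:

```python
# Toy bias audit: measure how often a hypothetical screening model returns a
# positive outcome per group, and flag large gaps for expert review.

def positive_rate(decisions):
    """Fraction of decisions that were positive (1 = positive, 0 = negative)."""
    return sum(decisions) / len(decisions)

# In practice these would be collected from logged model outputs.
decisions = {"group_a": [1, 1, 0, 1, 1, 0], "group_b": [1, 0, 0, 0, 1, 0]}

rates = {group: positive_rate(d) for group, d in decisions.items()}
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.2:  # the tolerance threshold is a policy decision, not a magic number
    print(f"Outcome gap of {gap:.0%} exceeds tolerance -- flag for expert review")
```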

3. Appropriate training and support
From the top to the bottom of an organisation, everyone needs to be trained on the AI systems and to use them, in order to understand their advantages as well as their current limits and complexities. Without proper training and support, people will not use AI tools correctly and effectively.

While training, onboarding, and support have a big impact, the UX and UI of these systems can greatly increase adoption. Incorporating easy-to-understand user interface elements makes it simpler to welcome new users and easier to use the product. Surveys and feedback sessions can help tweak and fine-tune systems to make them more efficient and improve user satisfaction, which in turn plays a role in the level of adoption of such tools across the company. Poor usability or understandability can lead to disappointing results and a higher rate of rejection. Besides, if users can't make sense of the rationale behind an AI system's decisions or its outputs, they may lose confidence in the system and revert to more traditional, albeit less efficient, methods, restricting the potential benefits of AI implementation.

It is also important for employees to understand the limitations of AI: that they are still needed, and that AI is there to support them, like an assistant, in a lot of cases. Although AI is often seen as a cause of job displacement, it can train and help employees become much more efficient. AI as an assistant could recognise patterns and problems in systems and general know-how, which could be used to prioritise the training that will help staff become more effective. Studies show that, when used properly, AI can help upskill lower-performing employees (PDF).

4. Prioritising best use cases for AI and integration within workflows
AI solutions should be built, configured, or adapted to address specific use cases where, as explained above, AI can help employees become more efficient. They may even replace certain tasks, and jobs, to make the company more efficient. But creating AI systems without analysing where they can help the workforce and the company the most is a gamble. Taking the most common AI examples and replicating them in your company is no guarantee of success. It helps to demonstrate to all levels of a company how AI can make them more efficient, starting right at the top. Helping your company not only understand but also experience how AI can help is one of the best ways to promote internal adoption of these tools. Making statements like "we're going to replace your department with AI soon", on the other hand, will have the opposite effect.
Just like other IT solutions, AI systems need to be continuously monitored and improved. This means evaluating the AI system's performance, finding what needs to be improved, and making the appropriate adjustments. Routine performance reviews, revisions, and audits help guarantee the AI system remains viable and effective in the long run.
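As a rough sketch of what such routine monitoring could look like in practice, the snippet below logs every interaction and then computes a couple of simple health metrics. The field names, file path, and thresholds are illustrative, not a recommendation:

```python
# Illustrative monitoring loop: log interactions, then review simple metrics.
import json
import statistics
import time

LOG_PATH = "ai_interactions.jsonl"  # made-up log location

def log_interaction(prompt, answer, latency_s, user_rating=None):
    """Append one model interaction to a JSON-lines log."""
    entry = {"ts": time.time(), "prompt": prompt, "answer": answer,
             "latency_s": latency_s, "rating": user_rating}
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

def weekly_review():
    """Compute simple health metrics from the accumulated log."""
    with open(LOG_PATH) as f:
        entries = [json.loads(line) for line in f]
    ratings = [e["rating"] for e in entries if e["rating"] is not None]
    print("median latency:", statistics.median(e["latency_s"] for e in entries))
    if ratings and statistics.mean(ratings) < 3.5:  # e.g. on a 1-5 scale
        print("Average rating below target -- schedule a tuning pass")

log_interaction("Summarise the Q1 report", "draft summary...", 1.8, user_rating=4)
weekly_review()
```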

5. Accurate, reliable, & relevant data sources
The outcomes and outputs of your AI system are only as good as the data sources used to train and fine-tune it. Although we talked about training end-users in a previous point, the relevance and quality of data sources are key to (pre-)training the actual AI systems. As mentioned in a previous post, the NYTimes interactive article explains how the training of AI models has a minimum threshold below which models are nearly unusable. There is also what is called fine-tuning, which further refines and improves certain aspects of an AI.
But models are also far less useful if the data sources are not relevant. Recent research has shown that a base generalist model (i.e. one not specialised in, say, legal matters) can help with the empathy factor of these systems. AI systems that rely only on specialised data sources can be abrupt and miss things, just like humans who lack empathy.
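For readers curious about what fine-tuning looks like in code, here is a minimal sketch using the Hugging Face Transformers library, assuming you have a small corpus of in-house text in a file called company_docs.txt. The model choice, file name, and hyperparameters are placeholders, not recommendations:

```python
# Minimal causal-LM fine-tuning sketch with Hugging Face Transformers.
# Assumes: pip install transformers datasets, and a company_docs.txt corpus.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "distilgpt2"  # a small model, purely for illustration
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token  # GPT-2 models have no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load and tokenise the in-house corpus.
data = load_dataset("text", data_files="company_docs.txt")["train"]
data = data.map(lambda b: tok(b["text"], truncation=True, max_length=128),
                batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()  # the fine-tuned weights land in ./tuned-model
```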

6. Replacing too much of the workforce with AI
Senior management may consider substituting their workforce with AI. With only corporate profits in mind, however, they may not realise the side effects this can have. Talking about the day-to-day operations of a company with the people responsible for them reveals the finer points, and you realise the vast amount of knowledge and ability the workforce relies on to keep things running. This is different from a company that looks to provide systems that support the workforce, incorporating systems and tools into the workflow to enhance employee productivity.
When I hear C-level management saying they want to go full digital deflection, it reminds me of how disastrous going to such extremes can be. Strong face-to-face skills can be decisive for the success of a company. Customers are frequently confronted with issues that need to be resolved, and if they need support that involves empathy and handling their anger, AI is not currently the best solution.
Accenture's research offers predictions on the impact of AI in the study "A new era of generative AI for everyone" (PDF). As you can see below, AI is expected to have a large impact on less manual activities:

[Chart from the Accenture study: "Generative AI will transform work across industries"]

7. Accuracy of Output
Being a bit of a perfectionist myself, I find systems like ChatGPT in their current form (including version 4) a nightmare. Why? Well, not only is it unlikely you will get the same answer each time, so the outcome is not very predictable, but these systems also frequently get things wrong. To top it off, the tone chosen is often an authoritative one, which can be misleading. LLMs work best when the outcome is predictable, yet they rarely provide the same results when asked the same question. This is a major issue for companies that use such a system without proper training, refinements, and guardrails. It's also important to force the system to use only a specific (set of) data source(s) and to prevent it from using data, systems, or sites outside of the company. Letting an AI run without such guardrails is highly risky.
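Here is a sketch of the two guardrails just mentioned: pinning generation settings that reduce (but do not eliminate) run-to-run variation, and refusing to answer unless the retrieved context comes from an approved internal source. The `call_llm` function stands in for whatever model API your company actually uses, and the source names are invented:

```python
# Guardrail sketch: low-temperature generation plus an internal-source allowlist.

APPROVED_SOURCES = {"hr-handbook", "sales-wiki", "legal-faq"}  # illustrative

def call_llm(prompt: str, temperature: float = 0.0) -> str:
    """Placeholder for the real model call; temperature=0 makes output more
    repeatable, though LLMs can still vary between runs."""
    return "model answer"

def answer_from_internal_docs(question: str, retrieved: list) -> str:
    """Answer only from documents whose source is on the company allowlist."""
    allowed = [d for d in retrieved if d["source"] in APPROVED_SOURCES]
    if not allowed:
        return "No approved internal source covers this question."
    context = "\n".join(d["text"] for d in allowed)
    prompt = (f"Answer ONLY from the context below. If the answer is not "
              f"there, say so.\n\nContext:\n{context}\n\nQuestion: {question}")
    return call_llm(prompt, temperature=0.0)

docs = [{"source": "hr-handbook", "text": "Annual leave is 25 days."}]
print(answer_from_internal_docs("How many days of annual leave do I get?", docs))
```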
The implications are far-reaching if you consider some of the areas AI is going to be used in. The decisions made by AI systems need to be understandable and transparent. AI will be used in areas like legal, health, and finance, where the impact of inaccurate or inappropriate decisions could have a dramatic effect not only on people and companies but also on society.

Monitoring AI-created content
Keeping an eye on AI-generated content, including content that talks about your company, is a valuable point for companies to consider. AI was recently used to create music with a cloned voice of Drake, and last year a deepfake video showed the president of Ukraine surrendering to Russia. Companies and organisations should be ready and alert for AI-generated content that could be used to deceive and hurt the public, other companies, and institutions. Just today, a fake image of an explosion at the Pentagon created a viral scare in the US, apparently causing a real dip in the stock market.

Side notes: If you think these generative and predictive models, generating many types of content, have just burst onto the scene, that is not true. The underlying technology has been around for ages, beginning with the predictive/autocorrect systems in applications such as MS Word. The first intelligent virtual keyboards on our smartphones had their own data sources and added personal dictionaries on top, learning from what we typed to personalise and improve our experience.

There have been a few breakthroughs, notably from Google, that had an exponential impact on the AI industry. Even though OpenAI is getting a lot of credit for its latest generative tool, it builds on the critical work of multiple companies and organisations that shared their findings in the past.

Generative AI systems, and in the case of tools like ChatGPT those based on LLMs (Large Language Models), are basically types of neural networks. To put it simply, these networks imitate how a brain works, with sets of neurons each taking on specific tasks or functions, all connected together. They are like a huge library and a big neural network combined, and these two components working together can determine the words with the greatest probability of being what you are looking for. They are not Artificial General Intelligence (AGI) systems that can think for themselves, nor sentient beings; they are just extremely good at predicting plausible content. AGI is likely to need completely different models and technology compared to the generative AI systems that are becoming so popular now.
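To make the "greatest probability" idea tangible, here is a toy example: a language model produces a score (a logit) for every word in its vocabulary, and a softmax turns those scores into probabilities. Real models do this over tens of thousands of tokens at every step; the words and numbers below are made up:

```python
# Toy next-word prediction: softmax over made-up scores for a tiny vocabulary.
import math

vocab  = ["Paris", "London", "banana", "blue"]
logits = [4.2, 2.8, -1.0, 0.3]  # pretend scores for "The capital of France is ..."

exps  = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

for word, p in sorted(zip(vocab, probs), key=lambda t: -t[1]):
    print(f"{word:7s} {p:.3f}")  # the model would pick (or sample) "Paris"
```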


In conclusion, AI is opening doors to new possibilities across various industries, promising efficiency and innovation. However, the integration of AI into an organisation's workflow is a complex process that requires a deliberate, thoughtful, and sustainable approach.

Firstly, companies must guarantee the security and confidentiality of data, guarding sensitive information while maintaining the integrity of AI systems. A multi-layered security approach, including strict access control policies, stringent authentication procedures, and regular security training for employees, can help achieve this.

Secondly, the risk of bias in AI outcomes needs to be carefully managed. Companies need to put mechanisms in place to understand the decision-making process of the AI, conduct regular audits of AI algorithms, and foster a diverse team of engineers and developers to identify and rectify biases. The incorporation of AI should be viewed as a continuous process of evolution rather than a one-time solution.

Thirdly, appropriate training and support for all employees are crucial to facilitate the correct and effective use of AI tools. User-friendly interfaces, regular feedback sessions, and clear communication about the limitations and potentials of AI can improve the adoption and performance of these systems.

Fourthly, AI solutions should be prioritised based on specific use cases that enhance efficiency and productivity. Implementation without careful analysis of where AI can provide the most value may result in suboptimal outcomes.

Fifthly, the quality and relevance of data sources used for training AI systems are key to their performance. Fine-tuning and continuous improvement of AI models are necessary to ensure they remain relevant and effective.

Sixthly, replacing large portions of the workforce with AI should be approached cautiously, considering both the potential efficiency gains and the valuable skills and knowledge of existing employees.

Lastly, the accuracy of AI outputs is critical, particularly in sectors like legal, health, and finance, where decisions can have significant implications. Ensuring transparency and understandability in AI decision-making can mitigate risks associated with inaccuracies.

As we navigate the AI era, it is crucial to keep in mind that while AI can deliver remarkable capabilities, it is a tool that currently works best to assist, not replace, human intelligence and judgement. A balanced approach that respects both the power and the limitations of AI will pave the way for successful, sustainable integration into our workplaces and societies.
