With the huge growth in the use of ChatGPT, Microsoft Copilot and other AI tools, we give advice here on what to consider when using them within your organisation and how to ensure your confidential data remains secure.
Remarkably, within just two months of its launch, ChatGPT reached 100 million monthly active users. The use cases for generative AI are huge, ranging from drafting an email to planning a presentation or summarising a lengthy document. The most popular of these tools are OpenAI’s ChatGPT and Microsoft’s Copilot for Microsoft 365.
The value of AI across industries is undeniable, as businesses increasingly seek ways to elevate client experiences and streamline operations. AI-powered tools are becoming go-to solutions for automating customer service and delivering personalised support.
Are your team members already using ChatGPT and other LLMs for work?
49% of organisations are already using ChatGPT. All chatbots and knowledge agents use Large Language Models (LLMs) to understand your question, find answers (either using pre-trained knowledge or in combination with a web search for up-to-date information), and then write you a succinct and hopefully helpful answer.
This applies both to agents that answer questions on publicly available data and to those that use your own private information, supplied in the prompt. An LLM is a piece of software that needs to run on a computer (whether that’s a server, a PC or even a phone).
ChatGPT and Microsoft Copilot both use the same LLM – currently GPT-4o, which was developed and trained by OpenAI.
When you use ChatGPT (also from OpenAI), it uses a copy of GPT-4o that OpenAI hosts on its own servers. When you use Microsoft Copilot, it uses a copy of GPT-4o running on a server hosted by Microsoft.
What is OpenAI and who owns it?
OpenAI is a US-based private business managed as a “capped-profit” organisation. It was founded in 2015 by a group including Sam Altman and Elon Musk (who has since exited the business). Microsoft invested $1 billion in OpenAI in 2019 and committed another $10 billion to the AI innovator in 2023.
Key Differences between Microsoft Copilot and ChatGPT
So, what are the main differences between ChatGPT and Microsoft Copilot (and Microsoft’s other AI offerings)? Here we outline some of the key areas that are useful to understand:
Geographic Control of Data Residency
One of the key differences with Microsoft’s AI is that when you use any of their AI services (including Microsoft Copilot or Azure OpenAI, for example), the LLM runs on a server in Microsoft Azure. That server will be in a geographic region you have chosen to deploy to or, in the case of Copilot (where you don’t choose), in the same geographic region as your Microsoft tenant – so it ticks the data residency box.
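As an illustration, here is a minimal sketch of calling a model through Azure OpenAI using the openai Python package. The resource name, key, API version and deployment name are placeholders we have assumed for the example; the point is simply that the endpoint is tied to the Azure region you chose when creating the resource.

```python
from openai import AzureOpenAI

# The endpoint belongs to an Azure OpenAI resource deployed to a specific
# Azure region (e.g. UK South), so prompts and responses are processed in
# that region. The resource name, key, API version and deployment name
# below are placeholders.
client = AzureOpenAI(
    azure_endpoint="https://my-uksouth-resource.openai.azure.com",
    api_key="<your-azure-openai-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # your deployment name, not the raw model name
    messages=[{"role": "user", "content": "Summarise our data residency options."}],
)
print(response.choices[0].message.content)
```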
Where is ChatGPT data stored?
ChatGPT data is primarily stored in the US, but some data may be processed or held transiently outside the region by trusted service providers for tasks like data annotation. Prior to March 1st, 2023, OpenAI used customer data as part of a process known as “fine-tuning” to, in effect, teach its model based on customer usage.
Data Privacy and Security
As Microsoft has stated publicly, all their existing privacy commitments extend to their commercial products. This is important, as it means that any information you submit to Copilot stays within the security boundary of your Microsoft tenant. Azure OpenAI allows for private implementations of OpenAI models, ensuring data security and compliance with enterprise requirements such as GDPR and standards such as ISO 27001.
This means that when using Copilot or any other Microsoft AI service (including anything you build yourself in Azure), your data is safe and stays within the security boundary of your organisation.
How secure is data in ChatGPT?
Company information that you submit to ChatGPT leaves your organisation and is then outside your control. It is most likely going to the USA (which you may or may not want), and how can you be sure what OpenAI may or may not do with the data you have sent them?
We recommend against sharing private company information with ChatGPT, such as:
- Intellectual Property and Creative Works
- Financial Information
- Sensitive Company Data
- Personal Data
- Usernames and Passwords
Do you need advice and guidance on this? Our expert consultants are ready to help.
How are Large Language Models (LLMs) trained?
One thing you should think about is how any sensitive information you submit to either ChatGPT or Copilot may be used by OpenAI or Microsoft. Will that data be used to train the model (e.g. GPT-4o) and thus become part of its pre-trained knowledge that anyone can then access?
Training a model is a major piece of work carried out on a new batch of gathered data. It is a very deliberate series of actions, not something that happens accidentally.
The model itself (e.g. GPT-4o) is stateless, which means that when you send in a prompt (e.g. ask a question) and it generates a response for you (the answer), none of that content is retained once you have received it – no trace of it is left in the model.
However, you will often ask an AI agent a series of follow-on questions and want it to know the context of your conversation, i.e. to “remember” what has gone before.
So, how does this work? This “memory” isn’t in the model itself (which is stateless); it lives in the chat history kept in another part of the agent (the LLM is the most important part of an agent, but not the only part). Each time you ask a follow-on question, the agent feeds it, together with the chat history, back into the model to get a response based on all of it.
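To make that concrete, here is a minimal sketch of the pattern using the openai Python package. The model name, the questions and the ask helper are our own illustrative assumptions, not a description of how any particular product is built.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# The conversation "memory" lives here, in the agent, not in the model.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question: str) -> str:
    """Ask a follow-on question, carrying the conversation context forward."""
    history.append({"role": "user", "content": question})
    # The model is stateless, so the full history is resent on every call.
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is data residency?"))
print(ask("Why does it matter for UK firms?"))  # "it" resolves via the resent history
```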
Microsoft does not use your data for training foundational models – you can read their commitment to that here: Protecting the data of our commercial and public sector customers in the AI era – Microsoft On the Issues
Does ChatGPT store your data?
ChatGPT does remember your conversations to provide context to future queries, using its Memory function, which is typically turned on by default. You can control what ChatGPT does and doesn’t remember in your individual settings, but the feature still raises questions about privacy and data control. And if you’re running a business, there are additional considerations around what your teams might share, and what gets stored, when memory is turned on.
Access control and enterprise policies
Within an organisation, you already have a hierarchy and systems to control access to your data within your existing Microsoft tenancy. This ensures that only authorised people have access to commercially sensitive data.
To protect privacy within your organisation when using Microsoft’s enterprise products with generative AI capabilities, your existing permissions and access controls continue to apply, ensuring that your data is displayed only to those users to whom you have given the appropriate permissions.
Although ChatGPT does have an enterprise version, in most cases users are simply using the freely available version, which does not have any of these controls. And ChatGPT offers only one element of AI, whereas the Microsoft AI stack offers a wide range of AI tools across its portfolio.
Third Party Sharing
Microsoft does not share your data with third parties without your permission. Your data, including the data generated through your organisation’s use of Copilot or Azure OpenAI Service – such as prompts and responses – is kept private and is not disclosed to third parties.
How do I keep my organisation’s data secure when employees are using AI?
We recommend that all organisations have a policy around the use of AI within their business.
New legislation came into force in the UK very recently, on 19 June 2025 – the Data (Use and Access) Act (DUAA).
The DUAA amends, but does not replace, the UK General Data Protection Regulation (UK GDPR), the Data Protection Act 2018 (DPA) and the Privacy and Electronic Communications Regulations (PECR).
The government has committed in Parliament to asking the Information Commissioner’s Office (ICO) to produce codes of practice on artificial intelligence, as it has been recognised that this is lacking from the current guidance. Until that guidance is available, it is important to create your own AI use policy.
Creating an AI use policy
We don’t claim to be experts in creating company policies, but here are a few sensible and simple steps that you can take:
- Carry out an audit to evaluate which AI tools employees have individually started to use within your organisation.
- If using third-party vendor AI tools, take care to evaluate their privacy policies and terms of service.
- Clearly communicate how AI is used within your organisation, the types of decisions it makes on your behalf, and the potential risks involved.
- Ensure you have robust data governance policies in place to define the collection, storage and use of AI-generated data.
- Make sure you have clear security measures to protect your AI systems and data from unauthorised access.
- Provide training to employees on the use of AI tools, share the potential risks, and outline ethical considerations that apply to your organisation.
While the ICO has not yet completed its code of practice for AI, there is some useful initial guidance available here.
In Summary
As with all online hosted information, and specifically with AI tools, it is good practice to ensure that there are strict security controls in place to protect your confidential, business-sensitive or personal data.
The key differences between Microsoft Copilot and ChatGPT are that:
- Any information that you enter into Copilot, or that it produces, does not leave the security boundary of your organisation (so it is secure and private)
- Your instance of Copilot runs in the same geographic region as your Microsoft 365 tenancy (so meeting any data residency requirements you may have)
- Microsoft do not use your data to train foundational models
While individual caution is advised, businesses must take a proactive approach to AI security by implementing clear policies and safeguards. Relying on employees to avoid sharing sensitive data is not enough; organisations should establish strict access controls, formal AI usage guidelines, and continuous monitoring to mitigate risks effectively.
What Next?
Our expert team of Microsoft consultants have successfully delivered AI projects, and Microsoft has recognised our expertise through our Microsoft technical certifications and customer experience with Solutions Partner Designations for Azure – Data & AI and for Modern Work.
If you’re considering how you can get started with AI in your organisation and need some advice and support, please get in touch.
One final thing
If you’ve enjoyed reading this blog post, then sign up at the bottom of this page to receive our monthly newsletter, where we share new blogs, technical updates, product news, case studies, company updates, and Microsoft and Cloud news.
We promise that we won’t share your email address with other businesses or parties, and will keep your details safe. You can choose to unsubscribe at any time.
Contact our Microsoft specialists
Phone or email us to find out more – or book a free, no-obligation call with our technical consultants using the contact form.
“It’s great to work with the Compete366 team, the team members are really knowledgeable, helpful and responsive. No question is too difficult for them. They have really helped us to manage our Azure costs and ensure we have the right environment. When we bring a new customer on-board we can scale up immediately via the Azure portal and quickly make environments available to our customers.”
“We also find that there’s never a heavy sales pitch from them – they are technically focused and recommend what’s right for us.”
“We had great support from the Compete366 AVD expert, who was really helpful, and guided me through options to tackle issues that arose.”
“The great thing about our AVD set up is that we have a custom set up for each project which Compete366 showed me how to do. And with the scalability and flexibility of AVD – we can meet clients’ expectations and get project users up and running more quickly.”
“We were immediately impressed with the advice that the Compete366 specialists in Azure Architecture were able to provide. This was all new to us and we really needed some external expertise that we could use to get our questions answered. The beauty of working with Compete366 is that we transferred our Azure consumption to them, and at the same time received all of their advice and guidance free of charge.”
“Working with Compete366 has been like extending our own team – they are extremely easy to work with. Right from the outset, it was clear what was on offer – everything was presented to us in a straightforward and uncomplicated way. They also provided just the right level of challenge to our developers and saved us time and money by suggesting better ways to implement our infrastructure.”
“Compete366 were able to help us leverage some useful contacts in Microsoft. We really value the expert advice and guidance that they have offered us in setting up a highly scalable infrastructure. We are also setting in place a regular monthly meeting which will allow us to further refine our architecture and ensure we keep on track as our requirements grow and change.”
“I have been delighted with the migration, where my team worked very hard, supported by expert advice from Compete366, and achieved everything in the timescale we had set out. Compete366 made sure that we didn’t make any expensive mistakes, and guided us through the process.”