When DeepSeek, the AI Large Language Model (LLM) from China, was launched back in January 2025, it sent shockwaves through the industry, with US President Donald Trump calling its rise “a wake-up call” for the US tech industry.

DeepSeek, which claims its model was made at a fraction of the cost of its rivals, became the most downloaded free app in the US just a week after it was launched. So, what has caused all the interest?

Unlike traditional models, DeepSeek claims to leverage innovative techniques, aggregate existing models, and deliver exceptional performance across diverse tasks, all while maintaining affordability.

Apparently, the cost of training DeepSeek was a fraction of what major US companies had typically spent on training new models. It was trained by repurposing older-generation, less powerful GPU chips, which enabled DeepSeek to achieve outstanding performance at an affordable cost.

By addressing the hardware limitations of other AI LLM models, DeepSeek enables users to leverage cutting-edge AI without requiring such expensive infrastructure.

The good news is that DeepSeek and its new models are now available within the Microsoft Azure Foundry AI ecosystem. This innovative technology unlocks opportunities for developers and businesses, setting new benchmarks in agility, accessibility, and performance, and has the capacity to revolutionise the way organisations leverage AI on the Microsoft Azure Infrastructure.

If you’d like to find out more about how to implement DeepSeek on Azure in your organisation, then please contact us for a free discussion with one of our Certified Cloud technical consultants.

Why is DeepSeek a disruptive force in AI?

DeepSeek’s disruptive impact can be attributed to several key factors:

Cost-Efficiency

DeepSeek managed to attain world-class results at a fraction of the expenditure incurred by other leading AI firms. It challenged the received wisdom that training LLMs must incur huge costs, such as the billions that US giants like OpenAI and Google have invested in infrastructure and specialised hardware.

DeepSeek has shown that it is possible to train high-quality AI models at a fraction of the cost.

This lowers the barriers to entry and makes AI solutions more accessible to businesses and developers, enabling a wider adoption of advanced AI capabilities.

Hardware Innovation

Because DeepSeek prioritises efficiency, it has introduced sustainable models that are less reliant on costly hardware. It has achieved this through more innovative use of the hardware available to it: for example, DeepSeek made optimal use of older-generation chips instead of advanced GPUs that were restricted.

DeepSeek’s approach has challenged the traditional dependency on expensive, cutting-edge hardware, which has caused a ripple effect, triggering shifts across the AI and semiconductor industries. For example, Nvidia’s stock value dipped following DeepSeek’s launch.

Open-Source Philosophy

DeepSeek promotes an open-source approach, unlike proprietary models. It removes licensing constraints, thus enhancing accessibility for developers, businesses, researchers, and organisations worldwide. The technology is freely available across industries, regardless of size, and across geographies, too.

Resource Optimisation

DeepSeek adds its own unique innovations, such as Mixture-of-Experts (MoE) and specific optimisations for reasoning and computation, differentiating itself from established models and contributing to its disruptive impact on the AI industry. By utilising established architectures, DeepSeek avoids the need for extensive development from scratch, reducing costs and complexity. DeepSeek takes the approach of aggregating elements from multiple foundational models, allowing it to outperform traditional models in terms of efficiency and adaptability.

DeepSeek available on Microsoft Azure AI Foundry

Microsoft now hosts DeepSeek models and services on Azure AI Foundry. This collaboration enhances the capabilities of both platforms, providing advanced AI solutions to businesses through a trustworthy, scalable, and enterprise-ready environment.

DeepSeek R1 uses a step-by-step training process and excels at reasoning tasks such as language, scientific reasoning, and coding. It features 671 billion total parameters with 37 billion active parameters and a 128k context length. Microsoft introduced DeepSeek R1 as part of a diverse portfolio of over 1,800 models, including frontier, open-source, industry-specific, and task-based AI models. There are now over 11,000 models available in Azure AI Foundry, and as well as DeepSeek R1, DeepSeek V3 is also available on Azure.

This integration allows businesses to seamlessly incorporate advanced AI while meeting SLAs, security, and responsible AI commitments—all backed by Microsoft’s reliability and innovation. This approach highlights why Microsoft is named a Leader in the 2024 Gartner Magic Quadrant for Cloud AI Developer Services.

What is Microsoft Azure AI Foundry?

Microsoft recently introduced Azure AI Foundry, previously known as Azure AI Studio, to create a unified, secure environment for developers and IT administrators to:

    • Build, modify, and deploy AI applications and agents.
    • Access a broad set of AI tools through a single portal, SDK, and APIs.
    • Access DeepSeek R1 as part of an extensive library of over 11,000 models (frontier, open-source, industry-specific, and task-based models), with secure, scalable access to advanced AI technology.

By leveraging the expertise of a consultant, you can ensure a smooth and efficient deployment of DeepSeek on Azure, addressing both strategic and technical challenges effectively.

What are the benefits of using DeepSeek in Azure?

Some of the benefits of using DeepSeek in Azure include:

  • Productivity
    • Using DeepSeek R1 on Azure AI Foundry significantly enhances the speed at which developers can experiment with, iterate on, and incorporate AI into their processes.
    • Model evaluation is made straightforward through the platform’s built-in tools that allow for rapid output comparisons, performance benchmarking, and effortless scaling of AI applications.
  • Cost Efficiency
    • DeepSeek R1 is built to deliver the best value for money while maintaining efficiency: by using large foundational AI models such as OpenAI’s GPT-4 and Meta’s LLaMA as a framework, it achieves excellent optimisation of processing resources.
  • Scalability
    • DeepSeek’s approach of building on pre-existing architectures makes it easier for the system to scale into a highly capable system while requiring fewer computational resources. This allows for a wider range of applications, as resources are less of a constraint.
  • Trustworthy AI
    • Microsoft has put DeepSeek through rigorous safety evaluations and extensive security reviews to address potential risks, providing a secure, compliant, and responsible environment for enterprise AI deployment.
    • Azure AI Content Safety is built in, including automatic content filtering with opt-out options for flexibility.
    • Automated tools for assessing application outputs pre- and post-deployment.

How to get started with DeepSeek on Azure AI Foundry?

To use models from the DeepSeek model catalogue on Azure AI Foundry, you first need to create an Azure AI Foundry project and navigate to the model catalogue. Then, search for the specific DeepSeek model you want to use.

Developers can create deployments to consume predictions from the models, and resources can be connected to Azure AI Hubs and Projects in Azure AI Foundry to build intelligent applications.
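Once a deployment exists, it can be called over plain HTTPS. The sketch below uses only the Python standard library and assumes a serverless (pay-as-you-go) DeepSeek R1 deployment; the endpoint URL, API key, and model name are placeholders to replace with the values shown on your deployment’s details page in Azure AI Foundry.

```python
import json
import urllib.request

# Placeholders -- substitute the endpoint and key from your own deployment.
ENDPOINT = "https://<your-resource>.services.ai.azure.com/models/chat/completions"
API_KEY = "<your-api-key>"


def build_request_body(prompt, max_tokens=512):
    """Build the JSON body for a chat completion request."""
    return {
        "model": "DeepSeek-R1",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def ask(prompt):
    """Send the prompt to the deployed model and return its reply text."""
    body = json.dumps(build_request_body(prompt)).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json", "api-key": API_KEY},
    )
    with urllib.request.urlopen(request) as response:
        payload = json.load(response)
    return payload["choices"][0]["message"]["content"]


# Example usage (requires a live deployment):
# print(ask("Explain step by step why 17 is prime."))
```

The same request shape works from any language that can send JSON over HTTPS; Microsoft’s Azure AI Inference SDK provides a higher-level client if you prefer not to build requests by hand.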

The following picture shows the high-level architecture.

DeepSeek in Azure

What are the costs for using DeepSeek on Azure AI Foundry?

Language models understand and process inputs by breaking them down into tokens. Costs in Azure AI Services are calculated per 1,000 tokens and apply to both input and output tokens; token costs differ based on the model series chosen.

Azure AI Foundry has no specific page in the Azure pricing calculator. Azure AI Foundry is composed of several other Azure services, some of which are optional. When you use Azure AI services in Azure AI Foundry portal, costs for Azure AI services are only a portion of the monthly costs in your Azure bill. You’re billed for all Azure services and resources used in your Azure subscription, including the third-party services. Refer to the Azure AI Foundry pricing page for the latest pricing, model catalogue and services available.

Currently, the charge for DeepSeek R1 is set at 2.36 USD per 1M tokens. Be advised, however, that usage is controlled with rate limits, which are subject to change without prior notice. Pricing may also change over time, and ongoing consumption will be billed at the new rates. Furthermore, as the model is still in preview, a new deployment may be needed to continue using it.
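To make the token pricing concrete, here is a small sketch that estimates the charge for a single request, assuming the preview rate of 2.36 USD per 1M tokens quoted above (actual rates may differ by the time you deploy):

```python
# Assumed preview rate for DeepSeek R1 -- check the Azure pricing page
# for current figures before relying on this number.
RATE_PER_MILLION_TOKENS = 2.36


def estimate_cost(input_tokens, output_tokens):
    """Estimated charge in USD for one request (input plus output tokens)."""
    total_tokens = input_tokens + output_tokens
    return total_tokens / 1_000_000 * RATE_PER_MILLION_TOKENS


# A 2,000-token prompt with a 6,000-token reasoning response:
print(round(estimate_cost(2_000, 6_000), 4))  # 0.0189
```

Reasoning models such as DeepSeek R1 tend to produce long step-by-step outputs, so output tokens usually dominate the bill.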

Azure OpenAI models and models offered as first-party consumption services from Microsoft (including DeepSeek) are charged directly by Microsoft and show up as billing meters under each Azure AI services resource.

What Next?

If you’d like to find out more about using DeepSeek within Azure AI, get in touch to book your initial free of charge call.

One final thing

If you’ve enjoyed reading this Blog Post, then sign up at the bottom of this page to receive our monthly newsletter where we share new blogs, technical updates, product news, case studies, company updates, Microsoft and Cloud news (scroll down to the sign up block on this page)

We promise that we won’t share your email address with other business or parties, and will keep your details safe. You can choose to unsubscribe at any time.

Published On: May 30th, 2025 / Categories: Azure, M365

Contact our Microsoft specialists

Phone or email us to find out more – or book a free, no-obligation call with our technical consultants using the contact form.

“It’s great to work with the Compete366 team, the team members are really knowledgeable, helpful and responsive. No question is too difficult for them. They have really helped us to manage our Azure costs and ensure we have the right environment. When we bring a new customer on-board we can scale up immediately via the Azure portal and quickly make environments available to our customers.”

“We also find that there’s never a heavy sales pitch from them – they are technically focused and recommend what’s right for us.”

Paul Coyne, Rusada

“We had great support from the Compete366 AVD expert, who was really helpful, and guided me through options to tackle issues that arose.”

“The great thing about our AVD set up is that we have a custom set up for each project which Compete366 showed me how to do. And with the scalability and flexibility of AVD – we can meet clients’ expectations and get project users up and running more quickly.”

Amir Dangol, Senior IT Manager, Integrity

“We were immediately impressed with the advice that the Compete366 specialists in Azure Architecture were able to provide. This was all new to us and we really needed some external expertise that we could use to get our questions answered. The beauty of working with Compete366 is that we transferred our Azure consumption to them, and at the same time received all of their advice and guidance free of charge.”

Tim Entwistle, Head of Software Development, Herrco

“Working with Compete366 has been like extending our own team – they are extremely easy to work with. Right from the outset, it was clear what was on offer – everything was presented to us in a straightforward and uncomplicated way. They also provided just the right level of challenge to our developers and saved us time and money by suggesting better ways to implement our infrastructure.”

Oliver Mackereth, Project Director, Hanse

“Compete366 were able to help us leverage some useful contacts in Microsoft. We really value the expert advice and guidance that they have offered us in setting up a highly scalable infrastructure. We are also setting in place a regular monthly meeting which will allow us to further refine our architecture and ensure we keep on track as our requirements grow and change.”

Matt Brocklehurst, Technical Director - AWOL Adventure

“I have been delighted with the migration, where my team worked very hard, supported by expert advice from Compete366, and achieved everything in the timescale we had set out. Compete 366 made sure that we didn’t make any expensive mistakes, and guided us through the process”

Darrell Cann, Managing Director, APEX
Jon Milward
Director

By submitting your details, you agree to be contacted.