AI tools are spreading across workplaces faster than most companies expected. From assistants that summarise documents to systems that analyse internal conversations, artificial intelligence is quickly becoming part of everyday operations.
But alongside the excitement around productivity, another conversation is gaining momentum: where AI actually runs and who controls the data it processes.
A recent article from ITPro points to a growing trend across Europe. More organisations are beginning to look at region-specific or locally deployed AI platforms, especially when sensitive data is involved. Security concerns, compliance requirements and questions of data sovereignty are pushing companies to rethink whether critical AI workloads should run entirely outside their own infrastructure.
This shift is already visible in highly regulated sectors such as finance, healthcare and government. But the conversation is quickly expanding to other industries as well. As AI becomes embedded in everyday workflows, companies of all sizes are starting to ask a simple question:
Where does our data actually go when AI processes it?
AI meeting tools create a new data exposure layer
AI meeting assistants are one of the fastest-growing categories of workplace tools. They record conversations, generate transcripts, summarise discussions and extract insights automatically.
That convenience comes with a hidden reality: meetings often contain some of the most sensitive information inside a company.
Strategy discussions. Product roadmaps. Financial forecasts. Customer issues. Internal disagreements.
When those conversations are processed by AI, the recordings and transcripts typically pass through multiple systems - transcription services, AI models, storage layers and analytics tools. Each step introduces potential exposure if the infrastructure is not carefully controlled.
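To make that chain concrete, the sketch below traces a single recording through a typical cloud pipeline. Every endpoint, service name and function in it is hypothetical - it illustrates the pattern, not any specific vendor's API - but each call marks a point where meeting data leaves the organisation's control.

```python
# A minimal sketch of a typical cloud meeting-AI pipeline. All endpoints
# below are hypothetical, for illustration only - the point is how many
# external systems a single recording touches before anyone reads a summary.

import requests  # each HTTP call below crosses a trust boundary


def process_meeting(recording: bytes) -> dict:
    # Hop 1: the raw audio is uploaded to a third-party transcription service.
    transcript = requests.post(
        "https://transcribe.example-vendor.com/v1/audio",  # hypothetical endpoint
        data=recording,
        timeout=60,
    ).json()["text"]

    # Hop 2: the transcript is sent to an external AI model for summarisation.
    summary = requests.post(
        "https://llm.example-vendor.com/v1/summarise",  # hypothetical endpoint
        json={"text": transcript},
        timeout=60,
    ).json()["summary"]

    # Hop 3: transcript and summary land in vendor-managed storage, and are
    # often mirrored into a separate analytics tool as well.
    requests.put(
        "https://storage.example-vendor.com/meetings/latest",  # hypothetical endpoint
        json={"transcript": transcript, "summary": summary},
        timeout=60,
    )

    return {"transcript": transcript, "summary": summary}
```

Three hops, three vendors, three places where a copy of the conversation now lives - and that is before retention policies, backups and model-training clauses enter the picture.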
AI systems introduce new security risks
Security failures are not only technical problems - they are financial ones.
According to the IBM Cost of a Data Breach Report, the global average cost of a data breach remains in the millions of dollars when investigation, response, downtime and reputational damage are taken into account.
As more AI systems process internal company data, the volume of sensitive information flowing through automated tools increases. Without strong security architecture, that exposure grows as well.
Why companies are reconsidering where AI runs
Because of these risks, organisations are starting to rethink where AI processing should happen.
Instead of sending internal conversations to external services by default, some companies are exploring private AI deployments where models, storage and processing remain inside their own infrastructure.
In this model, recordings, transcripts and insights are still generated automatically - but the data never leaves the organisation’s environment unless explicitly configured.
For security teams, that difference can be significant.
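The sketch below shows the same pipeline under a self-hosted model. Again, every host name and route is hypothetical: the point is that when transcription, the model and storage all resolve to services inside the organisation's own network, the data path never crosses the perimeter unless someone deliberately configures it to.

```python
# The same pipeline, sketched for a self-hosted deployment. Every URL here is
# hypothetical - what matters is that each stage resolves to a host the
# organisation operates itself, so audio and transcripts stay inside
# the network by default.

import requests

INTERNAL_BASE = "https://ai.internal.example-corp.local"  # hypothetical internal gateway


def process_meeting_privately(recording: bytes) -> dict:
    # Transcription runs on an internally hosted speech-to-text service.
    transcript = requests.post(
        f"{INTERNAL_BASE}/transcribe", data=recording, timeout=60
    ).json()["text"]

    # Summarisation uses a locally deployed model behind the same gateway.
    summary = requests.post(
        f"{INTERNAL_BASE}/summarise", json={"text": transcript}, timeout=60
    ).json()["summary"]

    # Results are written to storage the organisation runs itself; nothing
    # leaves the environment unless explicitly configured to.
    requests.put(
        f"{INTERNAL_BASE}/meetings/latest",
        json={"transcript": transcript, "summary": summary},
        timeout=60,
    )
    return {"transcript": transcript, "summary": summary}
```

Notice how little the application logic changes between the two sketches. What changes is where the endpoints live and who operates them - which is why many security teams treat deployment topology, rather than model choice, as the first governance decision.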
A quieter shift in meeting intelligence
AI is transforming how organisations work with knowledge. Meetings are no longer just conversations - they are becoming structured data that can be searched, analysed and reused.
But that transformation also raises a fundamental governance question: who controls the infrastructure that processes those conversations?
This is exactly the challenge we had in mind when building Ulla Notetaker.
From the beginning, we designed Ulla as a meeting intelligence platform that could operate inside an organisation’s own infrastructure. Instead of forcing companies to send sensitive meeting data to external environments, Ulla can be deployed as a self-hosted solution, allowing recordings, transcripts and AI processing to remain fully under the organisation’s control.
As AI becomes more deeply embedded in everyday work, questions about productivity are increasingly turning into questions about data governance and security.
And increasingly, organisations want that control to stay inside their own walls.
