AI Meeting Assistant Security and Privacy: A Guide for 2025
According to a recent study by McKinsey, 2024 was the year organizations began driving business value from generative AI. In fact, the percentage of organizations that reported using generative AI in the workplace nearly doubled in one year, according to the latest McKinsey Global Survey.
However, one common challenge leaders face is a lack of awareness about the privacy and security risks associated with AI tools and how to mitigate them.
AI meeting assistants like Fellow record, transcribe, and summarize key meeting decisions, saving teams hours of admin work. But as useful as they are, AI meeting assistants also come with significant security risks if not thoroughly vetted and implemented correctly. The primary risk is your sensitive business or customer data being leaked or used to train large language models (LLMs).
While many AI meeting assistants are indeed safe to use, problems arise when an organization doesn’t vet and approve a particular tool. When employees sign up on their own and invite bots to meetings, organizations can’t govern what gets recorded and who has access to those recordings. That means any one employee could be putting the whole organization at risk.
The solution is for companies to assess, implement, and govern a single AI meeting assistant for use org-wide, with assistance from an implementation team that supports your organization in the process. During that process, there are some questions that need to be asked about the tools under consideration to ensure they don’t pose a risk.
Here, we’ll take a closer look at what those risks are as well as what to look for when onboarding a new AI meeting assistant.
Addressing the security risks of an AI meeting assistant
Meetings are where employees discuss the insider details of how your business operates — everything from financial details to new products, competitor analysis, and other sensitive matters. When you use an AI meeting assistant, it gains access to all of the information it records — and that’s not inherently a bad thing. Sharing that information is what makes AI meeting assistants so valuable as productivity and organizational tools.
But of course, the flip side is that all that information needs to remain secure and confidential. That’s why the risk that’s likely to concern any organization’s IT or security director is that data being leaked.
With an AI meeting assistant, the main exposure risk stems from the fact that these tools are built on large language model (LLM) technology. Your data has to be passed along to the LLM in order for it to provide meeting summaries and insights.
What you don’t want to happen is for those LLMs to be trained on your data, or for the LLM vendor to hold onto your data indefinitely, putting it at risk of leaking.
8 Security details to consider when choosing an AI meeting assistant
Ask all of the following questions when considering a new AI meeting assistant.
1. Is your data being used to train LLMs?
An AI meeting assistant can only work its magic thanks to the technology of LLMs. Large language models are AI systems trained on large quantities of text to understand how humans write and speak. Using that training, they can accurately generate and understand content, allowing them to perform functions like summarizing a meeting or answering questions about a recorded sales call.
LLMs can keep learning new information to improve their performance, but what you don’t want is for your organization’s data to be included as training material. If that were to happen, the LLM could repeat your information back to other users outside your company, including competitors.
With all that in mind, any AI meeting assistant you’re considering should have a transparent policy that ensures any data it collects is not used to train LLMs.
2. How long is your data being retained?
An AI meeting assistant will need your data to work, but the next concern to raise is how long the tool hangs onto that data. The longer it retains your data, the more can be exposed in the event of a breach.
Look for an AI meeting assistant that discloses how long it keeps your data and, ideally, lets you set that retention period yourself.
For example, ask if there’s an option to delete meeting data older than one year, if that’s what aligns with your organization’s policies. Your meeting notes are only accessible as long as the note taker still has your data, but you should always have the final say in when that stops.
That’s also why you should be able to delete your data at will. This is mandatory for any AI meeting assistant offered in Europe to comply with GDPR.
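To make the retention idea concrete, here is a minimal sketch of how a one-year retention window works in principle. The record structure, field names, and dates are illustrative assumptions, not any vendor’s actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: flagging meeting records older than a retention window.
# The MeetingRecord shape and the sample data are invented for illustration.

@dataclass
class MeetingRecord:
    title: str
    recorded_at: datetime

def expired(records, retention_days=365, now=None):
    """Return the records that fall outside the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [r for r in records if r.recorded_at < cutoff]

records = [
    MeetingRecord("Q1 board review", datetime(2023, 1, 10, tzinfo=timezone.utc)),
    MeetingRecord("Weekly standup", datetime(2024, 11, 2, tzinfo=timezone.utc)),
]
# With a one-year policy evaluated on 2024-12-01, only the 2023 record expires.
to_delete = expired(records, now=datetime(2024, 12, 1, tzinfo=timezone.utc))
```

In practice a vendor that supports configurable retention runs a job like this on its side; the point is that the cutoff should be a setting your organization controls.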
3. What compliance certifications have been earned?
Speaking of GDPR, it’s one of several key terms to look for when vetting a new AI meeting assistant.
These are industry-standard certifications and regulations to look for that ensure proper data privacy:
- SOC 2: This is a report generated through an official audit that evaluates how a company protects customer data stored in the cloud or processed on behalf of clients. It’s a rigorous process primarily recognized in North America.
- ISO 27001: This is similar to SOC 2 but is presented as a certification and isn’t limited to cloud-based SaaS companies. The presence of either ISO 27001 or SOC 2 demonstrates strong security and privacy standards, depending on regional or industry requirements.
- GDPR: The General Data Protection Regulation (GDPR) is a European Union law designed to protect the personal data and privacy of individuals. Unlike SOC 2 and ISO 27001, GDPR isn’t voluntary but a legal requirement for doing business in the EU, and it includes provisions like being able to view, download, and delete your own data.
- HIPAA: This is a United States law that protects personal health information. It’s not a certification, but a type of compliance demonstrated by a company’s practices. If your meetings involve any personal health information, HIPAA compliance from your AI meeting assistant is crucial and helps you avoid serious legal consequences.
Make sure you know which legal requirements matter for your own organization’s data and ensure any AI meeting assistant you adopt is in compliance. As well, ensure that, at minimum, the note taker has either SOC 2 or ISO 27001 compliance. Fellow, for example, is GDPR ready, SOC 2 certified, and HIPAA compliant.
4. What are the security standards of the AI vendors?
Virtually all AI meeting assistants use third-party AI vendors to actually process data and return results. Some common LLM vendors include OpenAI, Anthropic, and Cohere.
It’s important that your chosen AI meeting assistant has agreements and standards in place with these vendors that are as good as or better than the AI meeting assistant’s own protocols. For example, it’s not very useful if you can delete your data with the AI meeting assistant, but the AI vendor hangs onto that data indefinitely. Or, if your AI meeting assistant is SOC 2 compliant, all of its AI vendors should be SOC 2 compliant as well.
Get into the nitty gritty and ask any potential AI meeting assistant about their third-party vendors and how they ensure privacy protocols are maintained when your data is passed along.
Fellow takes great care in choosing its vendors, and we ensure that the vendors we select employ the highest security and privacy safeguards to keep your data safe. Learn more about our third-party service providers here.
5. Do participants get notified about the recording?
The laws around recording consent vary from region to region. However, even where laws are looser, it’s always more ethically sound to let participants know the meeting is being recorded.
Look for an AI meeting assistant that is loud and clear when it joins a call to record. Ideally, it will:
- Join as a visible attendee with an image that conveys it is recording
- Add a recording icon somewhere in the video conferencing interface
- Send a message in the chat that it has joined and has started recording
And, as good ethical practice, if meeting with someone for the first time, proactively mention that you’ve invited an AI meeting assistant to record the call.
6. Can you pause recording or redact sections after a call is over?
It should be just as easy to remove an AI meeting assistant from a meeting as it is to add it. If at any point a party prefers to stop the recording, it should only take a click or two to pause or stop the recording. It should also be clear and visible to all parties that the AI meeting assistant has left.
Fellow, for example, allows users to pause and resume recordings for personal or confidential parts of the conversation.
Sometimes, an AI meeting assistant will capture a piece of sensitive information that you wish it hadn’t. That’s why it’s important that there’s a way to make redactions to everything the AI note taker produces, including the recording, transcription, and summary (coming soon to Fellow!). These redactions should be immediate, permanent, and visible to all parties. Lacking this feature has both privacy and legal implications.
When assessing a new AI meeting assistant, ask how to pause and resume recording, as well as whether the recording and transcript can be edited after the fact.
7. Do you have control over what is recorded?
It should be easy to see which meetings the AI meeting assistant will attend, as well as easy to toggle it on and off.
As well, you should be able to set rules in advance about which meetings it will attend. For example, you could have an automation that ensures the AI meeting assistant doesn’t attend meetings within your legal team.
In Fellow, users can create rules to determine which meetings Fellow joins, or even if Fellow should only join when manually invited.
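As a rough illustration, join rules like these boil down to simple predicates evaluated against each meeting’s metadata. The sketch below is hypothetical; the rule names, fields, and domain are invented for the example and don’t reflect any vendor’s actual configuration:

```python
# Hypothetical sketch of bot "join rules": skip meetings by title keyword,
# restrict the bot to internal-only meetings, or require a manual invite.
# All names and fields here are invented for illustration.

def should_join(meeting, blocked_keywords=(), internal_only=False,
                org_domain="example.com", require_invite=False):
    title = meeting["title"].lower()
    if any(kw in title for kw in blocked_keywords):
        return False  # e.g. keep the bot out of legal or HR meetings
    attendees = meeting.get("attendees", [])
    if internal_only and any(not a.endswith("@" + org_domain) for a in attendees):
        return False  # an external attendee is present
    if require_invite and not meeting.get("bot_invited", False):
        return False  # only join when explicitly invited
    return True

legal_sync = {"title": "Legal team sync", "attendees": ["ana@example.com"]}
should_join(legal_sync, blocked_keywords=("legal",))  # blocked by the keyword rule
```

Whatever the vendor’s interface looks like, the key question is whether you can express rules like these in advance rather than toggling the bot meeting by meeting.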
8. Do you have control over who can see recordings?
Once you have recordings, you should also have full control over who can access them. You should be able to determine, for example, if meeting data and recordings can only be seen by the attendees, or if others can see them as well. For example, you may want your marketing team to be able to access recorded sales calls for valuable customer insights.
The best way to manage this is with permission automations that set rules for who can see which recordings. Also, look for an AI meeting assistant that allows you to create channels within your recording library so, for example, you can have an organization-wide channel for Town Halls, but then a channel for QBRs limited to only senior leadership.
Similarly, ensure you have control over whether or not meeting recordings and recaps are shared externally if any of the attendees were not from within your organization. The theme, again, is control — determining access should always remain in your hands.
In Fellow, users can set restrictions on who is able to view recordings, as well as create channels for recordings with access controls.
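Conceptually, channel-based access control reduces to a lookup like the sketch below. The channel names, emails, and data structures are hypothetical, invented purely to illustrate the idea of org-wide versus restricted channels:

```python
# Hypothetical sketch of channel-based access control for a recording library.
# Channel names, emails, and the access model are invented for illustration.

CHANNELS = {
    "town-halls": {"access": "everyone"},          # org-wide channel
    "qbrs": {"access": "group",                    # restricted channel
             "group": {"ceo@example.com", "cfo@example.com"}},
}

def can_view(channel_name, user_email):
    """Return True if the user may view recordings in the given channel."""
    channel = CHANNELS[channel_name]
    if channel["access"] == "everyone":
        return True
    return user_email in channel["group"]
```

The design point is that access follows the channel, not the individual recording, so one rule covers every future recording filed there.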
Choose a secure AI meeting assistant with confidence
AI meeting assistants are invaluable tools in fast-paced, data-driven workplaces. However, ensuring that they’re implemented securely is non-negotiable.
By choosing a platform with robust security measures — like Fellow — you can confidently adopt AI meeting assistants without exposing your organization to unnecessary risks. Fellow is compliant with SOC 2, GDPR, CCPA, and HIPAA and never allows our partner LLMs to train on your data.
As well, Fellow gives organizations full control over how long their data is stored, who can access recordings, and which meetings Fellow attends.
Ready to protect your meeting data with a secure AI meeting assistant? Get started with Fellow today.