Privacy in AI Meetings: 7 Best Practices To Stay Safe

Secure your data and increase meeting productivity with privacy-conscious AI-powered tools.

By Alyssa Zacharias  •   May 28, 2024  •   5 min read

Before AI-powered notetakers, manual notetaking meant participants scribbled down or frantically typed out details, diverting their attention and decreasing comprehension. Now, AI automates tasks like transcribing, recording, and scheduling, allowing everyone to focus on the conversation. 

Despite these advantages, privacy in AI meetings remains a concern, fueled by cases where companies like Zoom and OpenAI have lost credibility over unclear data use. But there are AI meeting companions that strike a balance between productivity and privacy.

With meeting management solutions like the Fellow AI Meeting Copilot doing all the legwork—from creating action items to summarizing meeting notes—you’ll spend 17% less time in meetings, give hours back to employees, and increase efficiency in hybrid and remote work settings.

AI privacy concerns with platforms like Zoom

It’s natural to worry about the relationship between AI and privacy, especially after Zoom quietly changed its Terms of Service in March 2023.

What happened?

Under the new terms, Zoom could use all forms of customer input—like audio, video, chats, and documents—to develop new products and enhance existing services. This included the right to share customer data and to transform it into analytics and transcripts for AI development and machine learning.

Zoom wasn’t completely forthright in making this change, so any user who didn’t read the new Terms of Service and continued using Zoom unknowingly granted the video conferencing company a perpetual, royalty-free license to their content.

What did Zoom do about it?

Following public outcry, Zoom updated its Terms of Service again, promising not to use customer data to train AI models without consent. Smita Hashim, Zoom’s Chief Product Officer, published a blog post clarifying that Zoom is committed to using AI responsibly.

Despite these assurances, legal expert Sean Hogle criticized the revisions. He noted that the consent requirement only applies to “customer content” and not “service-generated data,” which includes usage data collected through Zoom’s services. This loophole means Zoom can still use that data without consent, suggesting the new terms do little to address privacy concerns.

What now?

This wasn’t the first time Zoom’s security issues caused concern. In 2021, Zoom settled a class action lawsuit for $85 million for sharing user data with companies like Google, Facebook, and LinkedIn without permission. Zoom also falsely claimed it provided end-to-end encryption.

Zoom’s history of failing to protect user data and uphold its commitments warrants across-the-board hesitation when using AI-powered tools. And these AI privacy issues aren’t limited to meeting-related tools—OpenAI’s ChatGPT and other generative AI companions have their fair share of shortcomings.

7 best practices for using AI safely in meetings

While you can’t control your conferencing platform’s decisions, here are seven best practices to follow to take meeting privacy into your own hands.

1. Select an AI companion with extensive security features

Choose AI-powered meeting solutions like Fellow for a secure meeting experience. Fellow uses Amazon Web Services (AWS) for server hosting, ensuring high-level encryption. 

Fellow’s commitment to security also includes:

  • Monitoring all devices connected to its network
  • Complying with SOC 2 standards
  • Regularly checking for security vulnerabilities
  • Scanning its code for potential bugs before hackers exploit them
  • Hiring external experts to test its system’s security

Fellow makes sure you retain ownership of your data. You can assign admins, managers, or individual contributors and control each person’s access. Only invitees and those given access can see and edit AI-generated notes and other sensitive information.

2. Limit access to AI meeting platforms

Restrict access to verified participants so unauthorized parties can’t enter your meetings. Turn multi-factor authentication (MFA) on for employee Google and Microsoft accounts and double-check your invite lists to keep unwanted people from listening in.

You should also: 

  • Use unique access passwords for each meeting
  • Screen attendees in waiting rooms before allowing them to join
  • Regularly review access logs for suspicious activity (see the sketch below)
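
To make that log review concrete, here’s a minimal Python sketch, assuming your platform can export access logs as a CSV with user and event columns; the column names, event label, and threshold are all illustrative, not any particular platform’s format:

```python
import csv
from collections import Counter

# Assumed CSV columns: user, event, timestamp.
# Adjust these to whatever your meeting platform actually exports.
FAILED_LOGIN_EVENT = "login_failed"
FAILURE_THRESHOLD = 5  # flag accounts with more failures than this

def flag_suspicious_logins(log_path: str) -> list[str]:
    """Return users with an unusually high number of failed logins."""
    failures = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["event"] == FAILED_LOGIN_EVENT:
                failures[row["user"]] += 1
    return [user for user, count in failures.items() if count > FAILURE_THRESHOLD]

if __name__ == "__main__":
    for user in flag_suspicious_logins("access_log.csv"):
        print(f"Review this account: {user}")
```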

3. Train meeting participants on privacy best practices

Teach employees to use AI tools safely, and discuss what counts as sensitive information. Strong passwords are a must: require numbers, special characters, and both upper- and lowercase letters. Host regular seminars on data privacy to educate employees about the risks of oversharing personal information in AI-assisted meetings.
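
To make the password rule concrete, here’s a minimal Python checker your team could adapt for training exercises; the 12-character minimum is an assumption, not an official standard:

```python
import re

def is_strong_password(password: str, min_length: int = 12) -> bool:
    """Check the rules above: minimum length plus numbers,
    special characters, and both upper- and lowercase letters."""
    return (
        len(password) >= min_length
        and re.search(r"[0-9]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[^a-zA-Z0-9]", password) is not None
    )

print(is_strong_password("meetingnotes2024"))    # False: no uppercase or special character
print(is_strong_password("M3eting!Notes#2024"))  # True: satisfies every rule
```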

Here are a few things you might do to train your team:

  • Hold regular workshops on data privacy
  • Set rules on the information employees can and can’t share during team chats
  • Encourage employees to use secure, private networks for meetings

4. Regularly monitor AI meeting platforms for unusual activity

Watch for abnormal activity within your meeting platform and AI companion interface, like unauthorized login attempts and unexpected data loss. The more proactive your IT department is, the faster it can detect suspicious behavior and execute risk management strategies. To encourage this proactivity, ask your CTO to craft reporting and contingency plans for specific risks, such as an unusually high volume of data downloaded outside working hours.
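
As one illustration of that kind of check, here’s a minimal Python sketch that flags large downloads outside working hours; the event format, working hours, and threshold are all assumptions your IT team would replace with real values:

```python
from datetime import datetime

WORK_START, WORK_END = 9, 18  # assumed working hours (24-hour clock)
OFF_HOURS_LIMIT_MB = 500      # assumed alert threshold in megabytes

def flag_off_hours_downloads(events: list[dict]) -> list[dict]:
    """Flag download events that are both large and outside working hours.
    Each event is assumed to look like:
    {"user": str, "timestamp": ISO 8601 str, "megabytes": float}."""
    flagged = []
    for event in events:
        hour = datetime.fromisoformat(event["timestamp"]).hour
        off_hours = hour < WORK_START or hour >= WORK_END
        if off_hours and event["megabytes"] > OFF_HOURS_LIMIT_MB:
            flagged.append(event)
    return flagged

events = [
    {"user": "jdoe", "timestamp": "2024-05-28T02:14:00", "megabytes": 1200.0},
    {"user": "asmith", "timestamp": "2024-05-28T10:30:00", "megabytes": 800.0},
]
print(flag_off_hours_downloads(events))  # only the 2 a.m. download is flagged
```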

You might also:

  • Regularly check the meeting platform’s security measures
  • Ask employees to keep an eye out for suspicious activity and report it

5. Ensure AI meeting software is updated with the latest security patches

Always update your AI meeting solutions to the latest versions, which often include security patches that make the software less susceptible to bugs and exploits. While doing so, you might also find other handy features that improve safety and productivity, like Fellow’s Meeting Guidelines.

Ask your team to:

  • Turn on automatic updates for AI meeting tools
  • Schedule routine checks for software updates if auto-updates aren’t available

6. Regularly review and update privacy policies

Assess the privacy policy changes AI companies make and communicate them to employees often. This ensures your team understands how these companies use their data. Not only does this clarity build team trust, but it also lets employees make an informed choice about whether to use AI tools.

Remember to also:

  • Check for general privacy regulation changes
  • Adjust your company’s internal privacy policies as industry best practices shift

7. Only store essential data needed for AI meeting functionality

Limiting the data AI collects reduces the risk and severity of security breaches. Store only what’s necessary for your meetings and workflows. For example, you can convert audio recordings into text summaries and delete the original files afterward rather than storing everything indefinitely.
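
Here’s a rough Python sketch of that retention workflow; transcribe and summarize are hypothetical placeholders for whatever speech-to-text and summarization services you use:

```python
from pathlib import Path

def transcribe(audio_path: Path) -> str:
    """Placeholder for your speech-to-text service of choice."""
    raise NotImplementedError

def summarize(transcript: str) -> str:
    """Placeholder for your summarization service of choice."""
    raise NotImplementedError

def archive_meeting(audio_path: Path, summary_dir: Path) -> Path:
    """Keep only a text summary; delete the original recording."""
    summary = summarize(transcribe(audio_path))
    summary_path = summary_dir / f"{audio_path.stem}.txt"
    summary_path.write_text(summary)
    audio_path.unlink()  # remove the raw audio once the summary is saved
    return summary_path
```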

Complete this step by: 

  • Checking for and enabling limited data collection in the AI tool’s settings
  • Encrypting stored data and restricting employee access (see the sketch below)
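
For the encryption step, here’s a minimal Python sketch using the cryptography library’s Fernet API for symmetric encryption; the file names are illustrative, and real key management (for example, storing the key in a secrets manager) is out of scope here:

```python
from cryptography.fernet import Fernet

# Generate the key once and store it in a secrets manager,
# never alongside the encrypted data.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a stored meeting summary before it goes to shared storage.
with open("meeting_summary.txt", "rb") as f:
    encrypted = fernet.encrypt(f.read())
with open("meeting_summary.txt.enc", "wb") as f:
    f.write(encrypted)

# Only employees with access to the key can read the data.
original = fernet.decrypt(encrypted)
```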

Protect your meetings with Fellow

Keeping conversations private without compromising productivity is challenging, especially when working over the cloud in a remote or hybrid setup. But Fellow is up to the challenge.

Featuring end-to-end encryption, privacy controls, and regularly updated security patches, Fellow ensures your meetings remain confidential. We crafted our AI meeting solution with privacy front-of-mind, so rest assured knowing your team’s data is protected.

Have safer, more productive meetings with Fellow.
