Is Microsoft Copilot a productivity booster or a cybersecurity risk?

Nov. 18, 2024
If left unaddressed, M365's default permissions and Copilot's discoverability can create major cybersecurity risks for organizations and their data.

Microsoft Copilot has emerged as a powerful tool for companies using M365. As a coding assistant, presentation designer, writing companion, and research assistant, it supports virtually every work task with remarkable speed and accuracy.

From a business perspective, Copilot's benefits make it a game-changing investment in productivity. With seamless integration across the Microsoft ecosystem, Copilot leverages organizational data to produce contextually accurate outputs that align with business needs, allowing tasks that once took minutes or hours to be completed in seconds.

Yet Copilot's ability to surface and use information also poses major risks from a cybersecurity standpoint. Without some upfront work, M365's default permission settings, paired with Copilot, become a liability. These defaults are easy to discover and susceptible to manipulation, making data theft via uncontrolled use of Copilot a real danger for organizations unfamiliar with the pitfalls. For companies aiming to use Copilot as safely as possible, understanding its dangers is essential to ensuring the risks don't outweigh the rewards.

M365 Default Permission Settings

Copilot's ability to surface organizational data from SharePoint Online, OneDrive for Business, and other Office Apps is a double-edged sword. On the one hand, it dramatically improves productivity by allowing users to avoid manually uploading data for tasks. For instance, an employee creating a weekly schedule can rely on Copilot to source information from Outlook emails and workflows to create a detailed plan. Similarly, an accountant working on a quarterly report can ask Copilot to pull and sort relevant data from various data sets, saving valuable time ordinarily reserved for manual research.

On the other hand, Copilot's omnipresence across M365 can be exploited by anyone who understands M365's default security settings and advanced prompt engineering techniques. Many companies fail to realize that M365's (and, by extension, Copilot's) out-of-the-box sharing, retrieval, and collaboration settings are often overly permissive. As a result, an employee might inadvertently include confidential data in a presentation or public document without realizing that Copilot pulled it from what should have been a restricted source. Without the right security structure in place, an entry-level employee could access confidential financial data reserved for executives and board members, simply because permissions were set too broadly or allowed to drift out of control, which is, unfortunately, an easy trap to fall into.

Organizations with a minimal understanding of security best practices might not consider converting these default settings to align with departmental needs or company hierarchy. However, wide access to sensitive data dramatically broadens the attack surface and the scope for breaches, making it harder to prevent unauthorized data sharing or mitigate risks from social engineering exploits.
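To make the exposure concrete, here is a minimal sketch in Python against the Microsoft Graph API. It assumes an Entra ID app registration granted the Files.Read.All application permission, and the tenant, client, drive, and item IDs in angle brackets are placeholders. It lists the permissions on a single file and flags any sharing link scoped to the whole organization or to anonymous users, which is exactly the kind of access Copilot inherits when answering a prompt.

```python
import msal
import requests

TENANT_ID = "<tenant-id>"      # placeholder: your Entra ID tenant
CLIENT_ID = "<client-id>"      # placeholder: app registration with Files.Read.All
CLIENT_SECRET = "<secret>"     # placeholder: keep in a vault, never in source

# Client-credentials flow: the script reads permissions as an application,
# not as a signed-in user.
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
headers = {"Authorization": f"Bearer {token['access_token']}"}

DRIVE_ID = "<drive-id>"  # placeholder: the SharePoint/OneDrive drive holding the file
ITEM_ID = "<item-id>"    # placeholder: the file to inspect

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/drives/{DRIVE_ID}/items/{ITEM_ID}/permissions",
    headers=headers,
)
resp.raise_for_status()

for perm in resp.json().get("value", []):
    scope = perm.get("link", {}).get("scope")  # "anonymous", "organization", or "users"
    if scope in ("organization", "anonymous"):
        # Anyone in the tenant (or anyone with the URL) can open this file,
        # so Copilot can surface its contents for any of those users too.
        print(f"Broadly shared: scope={scope}, roles={perm.get('roles')}")
```

If a file that should be restricted shows an "organization" link, anyone in the tenant, and therefore Copilot acting on their behalf, can read it.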

How Threat Actors Exploit Copilot

There are multiple ways that threat actors can exploit Copilot's default permission settings, both directly and indirectly. Unauthorized access to sensitive company data is the most obvious risk, as this data might fall into the hands of an unwitting employee or a bad actor within the organization. For example, an employee might fall victim to a social engineering attack in which a threat actor poses as a coworker asking for specific information. Because the employee finds they can easily surface sensitive company information in Copilot (thanks to poorly configured M365 permissions), they may not think twice about its sensitivity before handing it over. Data theft can also occur more deliberately: malicious insiders might use Copilot to retrieve confidential information and share it with competitors or hackers looking to exploit the organization.

Another major security concern arises when threat actors breach an employee's M365 account. Threat actors who hijack company accounts can launch internal spear-phishing attacks even if permissions are limited. Posing as legitimate employees and armed with information gleaned from Copilot via the compromised account, they can identify who holds sensitive company information and request it directly.

Skilled prompt engineers can also abuse Copilot's permission settings. While generative AI programs like Copilot are typically designed to block certain requests, advanced threat actors understand how to manipulate these systems. It's also sobering that every major LLM on the market today is susceptible to jailbreaks: techniques for getting the model to respond in ways its designers did not intend. Skilled threat actors know about these and will use them when they can. Together, these risks underscore the multiple avenues of attack that Copilot's default settings open up.

Mitigating Copilot’s Risks

Copilot's cybersecurity risks might seem like a major obstacle for companies aiming to maintain a strong cybersecurity posture. However, there are multiple best practices that CISOs and administrators can follow to leverage Copilot's productivity benefits while reducing the risks.

  • Reconfigure default permissions in M365 - Tech leaders and security officers should thoroughly review M365's default permission settings and configure them to align with the company's needs. This includes sharing settings for SharePoint Online, OneDrive for Business, and Microsoft Teams. Grant employees access to specific data sets only when it's essential to their jobs. This step is crucial for companies in highly regulated fields like finance and healthcare. A permission management solution can help by surfacing weak permission settings, a task the native Microsoft 365 tools handle poorly.
  • Conduct regular cybersecurity audits - Never assume that permissions remain locked down after being set. Have security teams run regular audits to confirm that settings still align with the company's needs and that employees have access only to the data their roles require; a minimal automation sketch follows this list.
  • Train employees on best practices - Train employees to maximize Copilot's capabilities for their roles, while devoting equal time to cybersecurity awareness and risk mitigation. Ensure employees are familiar with all known attack vectors, both internal and external, and conduct regular training sessions for reinforcement.
  • Implement a zero-trust security model - Always operate under the assumption that every request is untrustworthy until verified. All employee accounts should have password protection backed by multi-factor authentication (MFA) to prevent threat actors from gaining access to them.
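
The audit step lends itself to automation. The sketch below, which assumes an access token acquired as in the earlier example and a placeholder drive ID, sweeps the root folder of a SharePoint document library via Microsoft Graph and reports every file carrying an organization-wide or anonymous sharing link. It is a starting point rather than a complete audit: it does not recurse into subfolders or examine direct user and group grants.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<bearer-token>"   # placeholder: acquire via MSAL as in the earlier sketch
DRIVE_ID = "<drive-id>"           # placeholder: the document library to audit
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def list_items(drive_id):
    """Yield every item in the drive's root folder, following Graph paging links."""
    url = f"{GRAPH}/drives/{drive_id}/root/children"
    while url:
        page = requests.get(url, headers=HEADERS)
        page.raise_for_status()
        data = page.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")  # next page, if any

def broad_permissions(drive_id, item_id):
    """Return permissions whose sharing-link scope is organization-wide or anonymous."""
    resp = requests.get(
        f"{GRAPH}/drives/{drive_id}/items/{item_id}/permissions", headers=HEADERS
    )
    resp.raise_for_status()
    return [
        p for p in resp.json().get("value", [])
        if p.get("link", {}).get("scope") in ("organization", "anonymous")
    ]

# Flag files that anyone in the tenant (or anyone with the URL) can reach --
# exactly the items Copilot will surface for any user's prompt.
for item in list_items(DRIVE_ID):
    for perm in broad_permissions(DRIVE_ID, item["id"]):
        print(f"{item['name']}: {perm['link']['scope']} link, roles={perm.get('roles')}")
```

Run on a schedule, a report like this gives security teams a recurring picture of which files have drifted into broad visibility and should be locked back down before Copilot amplifies the exposure.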

If left unaddressed, M365's default permissions and Copilot's discoverability can create major cybersecurity risks for organizations and their data. However, these risks shouldn’t deter organizations from tapping into Copilot's massive productivity benefits. With proper permission management and a robust cybersecurity strategy, companies can fully leverage Copilot while shielding company data.

About the Author

Andy Syrewicze | Security Evangelist for Hornetsecurity

Andy Syrewicze is a 20-plus-year IT pro specializing in M365, cloud technologies, security, and infrastructure. By day, he's a Security Evangelist for Hornetsecurity, leading technical content. By night, he shares his IT knowledge online or over a cold beer. He holds the Microsoft MVP award in Security.