Saturday, May 18, 2024

A 2023 Recap: The Path of Least Resistance Is Non-Human Access

Cyberattacks were rampant in 2023, and non-human access appears to have been more common than any other attack channel. With a constantly expanding, uncontrolled attack surface and 11 high-profile attacks in just 13 months, non-human identities are the new perimeter, and 2023 is just the beginning.

Why non-human access is a haven for cybercriminals

People always look for the simplest route to what they want, and cybercrime is no exception. Threat actors seek the path of least resistance, and in 2023 that path appears to have run through non-user access credentials: API keys, tokens, service accounts, and secrets.

“50% of access tokens connecting Salesforce to third-party apps are unused. In GitHub and GCP, the figures approach 33%.”

These non-user access credentials connect apps and resources to other cloud services. Unlike user credentials, they are not protected by safeguards such as MFA, SSO, or other IAM policies, and they are typically over-permissive, unmonitored, and never revoked, which makes them a veritable hacker’s paradise. In fact, half of the access tokens connecting Salesforce to external applications go unused; in GCP and GitHub, the figure approaches 33%.*
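Unused tokens like these are the natural first thing to hunt for. As a rough, vendor-neutral illustration (the token inventory here is hypothetical; in practice it would come from the platform’s audit logs or API), a minimal staleness audit might look like:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of non-human access tokens and when each was last used.
TOKENS = [
    {"name": "salesforce-sync", "last_used": datetime(2023, 1, 10, tzinfo=timezone.utc)},
    {"name": "gh-deploy-bot", "last_used": datetime(2023, 11, 2, tzinfo=timezone.utc)},
]

def stale_tokens(tokens, now, max_idle_days=90):
    """Return names of tokens that have not been used within max_idle_days."""
    cutoff = now - timedelta(days=max_idle_days)
    return [t["name"] for t in tokens if t["last_used"] < cutoff]

now = datetime(2023, 12, 1, tzinfo=timezone.utc)
print(stale_tokens(TOKENS, now))  # ['salesforce-sync']
```

Tokens flagged this way are candidates for revocation; anything that sits idle for months is pure attack surface with no business value.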

So how do cybercriminals exploit these non-human access credentials? To understand the attack vectors, we first need to understand the different forms of non-human access and identities.

External non-human access is created when employees connect third-party tools and services to core business and engineering environments, such as Salesforce, Microsoft 365, Slack, GitHub, and AWS, to improve agility and streamline operations. These connections are established through API keys, service accounts, OAuth tokens, and webhooks owned by the third-party app or service (the non-human identity). With freemium cloud services and bottom-up software adoption on the rise, many of these connections are created by different employees without any security governance and, worse, from unverified sources. According to Astrix data, 90% of the apps connected to Google Workspace environments are non-marketplace apps, meaning they have not been vetted by an official app store. The figure is 77% in Slack and 50% in GitHub.

“74% of Personal Access Tokens in GitHub environments have no expiration.”

Internal non-human access is similar in nature but is established through internal access credentials, commonly known as “secrets.” R&D teams constantly create secrets that connect different services and resources. These secrets are often scattered across multiple secret managers (vaults), leaving the security team with no visibility into where they are, whether they are exposed, what they grant access to, or whether they are misconfigured. In fact, in GitHub environments, 74% of Personal Access Tokens never expire. Similarly, 59% of GitHub webhooks are misconfigured, meaning they are unassigned or unsecured.
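An “unsecured” webhook is typically one configured without a shared secret, so the receiver has no way to verify who sent a payload. As a sketch of the fix, a GitHub-style webhook delivery can be verified with an HMAC-SHA256 signature; the secret and payload below are illustrative:

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Verify a GitHub-style X-Hub-Signature-256 header against the raw payload."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison prevents timing attacks on the signature check.
    return hmac.compare_digest(expected, signature_header)

secret = b"example-webhook-secret"   # illustrative; store in a secret manager in practice
payload = b'{"action": "push"}'
good_sig = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()

print(verify_webhook(secret, payload, good_sig))      # True
print(verify_webhook(secret, payload, "sha256=bad"))  # False
```

A webhook endpoint that skips this check will happily accept forged payloads from anyone who discovers its URL.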

High-profile attacks in 2023 that exploited non-human access

This is not a theoretical threat. In 2023, numerous well-known companies suffered non-human access exploits affecting thousands of customers. In these attacks, threat actors leverage compromised or stolen credentials to access an organization’s most critical internal systems or, in the case of external access, its customers’ environments (supply chain attacks). Notable examples include:

Okta (October 2023): Attackers used a compromised service account to access Okta’s support case management system. As a result, they could view files uploaded by several Okta customers as part of recent support cases.

GitHub Dependabot (September 2023): Hackers stole GitHub Personal Access Tokens (PATs) and used them to push unauthorized commits to both private and public GitHub repositories, disguised as Dependabot contributions.

Microsoft SAS Key (September 2023): Over 38TB of highly sensitive data was exposed after a SAS token, created on a Storage account and published by Microsoft’s AI researchers, granted full access to the entire account. Attackers could have had access for nearly two years (!).
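The mitigation for incidents like this is to scope tokens narrowly and expire them quickly. As a vendor-neutral sketch (this is not Azure’s actual SAS implementation; the key, resource name, and permission letters are illustrative), a signed token can embed its permissions and expiry so the service rejects it once either is exceeded:

```python
import hashlib
import hmac

SIGNING_KEY = b"service-signing-key"  # illustrative; a real service keeps this private

def issue_token(resource: str, permissions: str, ttl_seconds: int, now: int) -> str:
    """Issue a signed token granting `permissions` on `resource` until now + ttl."""
    expiry = now + ttl_seconds
    message = f"{resource}|{permissions}|{expiry}"
    sig = hmac.new(SIGNING_KEY, message.encode(), hashlib.sha256).hexdigest()
    return f"{message}|{sig}"

def check_token(token: str, resource: str, action: str, now: int) -> bool:
    """Accept only if the signature is valid, the token is unexpired,
    and the requested action is within the granted permissions."""
    res, perms, expiry, sig = token.split("|")
    message = f"{res}|{perms}|{expiry}"
    expected = hmac.new(SIGNING_KEY, message.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False  # tampered or forged token
    if now >= int(expiry):
        return False  # expired token
    return res == resource and action in perms

now = 1_700_000_000
token = issue_token("reports-container", "r", ttl_seconds=3600, now=now)
print(check_token(token, "reports-container", "r", now + 60))    # True: valid read
print(check_token(token, "reports-container", "w", now + 60))    # False: write not granted
print(check_token(token, "reports-container", "r", now + 7200))  # False: expired
```

Had the leaked Microsoft token been read-only and hour-lived in this spirit, rather than account-wide and valid for years, the blast radius would have been negligible.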

Slack GitHub Repositories (January 2023): Threat actors used a “limited” number of stolen Slack employee tokens to access Slack’s externally hosted GitHub repositories, from which they could download private code repositories.

CircleCI (January 2023): Malware that evaded the antivirus program compromised an engineering employee’s laptop, enabling threat actors to steal live session tokens. With stolen session tokens, attackers can access accounts as if they were the legitimate user, even when those accounts are protected by two-factor authentication.

The impact of GenAI access

“32% of GenAI apps connected to Google Workspace environments have very wide access permissions (read, write, delete).”

As one might anticipate, the explosive adoption of GenAI products and services exacerbates the non-human access problem. GenAI took off in 2023, and the trend will likely only continue. With ChatGPT becoming the fastest-growing app in history and downloads of AI-powered apps up 1506% over the previous year, security leaders are already losing sleep over the risks of connecting often-unvetted GenAI apps to core corporate systems. Astrix Research’s figures offer further evidence of this attack surface: 32% of GenAI applications connected to Google Workspace environments have extremely broad access, including read, write, and delete permissions.*

The industry is waking up to the risks of GenAI access. A recent Gartner report, “Emerging Tech: Top 4 Security Risks of GenAI”, describes the dangers of the widespread use of GenAI tools and technology. According to the report, “The use of generative AI (GenAI) large language models (LLMs) and chat interfaces, especially connected to third-party solutions outside the organization firewall, represent a widening of attack surfaces and security threats to enterprises.”

Security needs to be a facilitator

Non-human access is a direct outcome of cloud adoption and automation, two welcome trends that drive growth and efficiency, so security must support it. And since security leaders always aim to be facilitators rather than inhibitors, a strategy for securing non-human identities and their access credentials is no longer optional.

Improperly secured non-human access, both external and internal, greatly increases the likelihood of supply chain attacks, data breaches, and compliance violations. Security policies, and automated tools to enforce them, are essential to safeguard this volatile attack surface and let the organization reap the benefits of automation and hyper-connectivity.

Schedule a live demo of Astrix – a leader in non-human identity security

*Based on data gathered by Astrix Research from enterprise environments of companies with 1,000–10,000 employees.
