
We previously discussed why new approaches to authentication (authN) and authorization (authZ) are needed and what happens when they are not implemented correctly. Now we will cover how to secure web APIs the right way.

Teams need to address three core elements to develop a simple yet scalable model for API security: safely managing logical state, supporting distributed architectures built on containers and microservices, and enabling a web of authentication for linking loosely coupled services.

Modern tools and frameworks can address all three of these through the appropriate combination of the OAuth 2.0 Framework, OpenID Connect, and JSON Web Tokens (JWT).

Manage the logical state. Traditional web security evolved to simplify the user experience. Developers found a way to use session cookies for managing the authorization state of a user. This reduced frustration with having to log in repeatedly: users only had to enter passwords once per session, or even less often if the credentials were stored. However, these session cookies were vulnerable to session hijacking attacks that take advantage of the limited security around cookies. A better practice is to securely manage the logical state using tokens.
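As a rough illustration of that practice, the sketch below (Python standard library only) issues and validates a signed, self-contained state token instead of an unsigned session cookie; the secret, claim names, and lifetime are placeholders rather than part of any particular framework.

```python
# Minimal sketch: carry the logical state in a signed token instead of a
# server-side session looked up via a cookie. Placeholder secret and claims.
import base64, hashlib, hmac, json, time

SECRET = b"replace-with-a-real-secret"  # hypothetical shared secret

def issue_state_token(user_id: str, ttl_seconds: int = 900) -> str:
    """Encode the user's logical state and sign it so it cannot be forged."""
    payload = json.dumps({"sub": user_id, "exp": int(time.time()) + ttl_seconds})
    body = base64.urlsafe_b64encode(payload.encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def validate_state_token(token: str) -> dict | None:
    """Return the claims if the signature is valid and the token is unexpired."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    claims = json.loads(base64.urlsafe_b64decode(body))
    return claims if claims["exp"] > time.time() else None

token = issue_state_token("user-42")
print(validate_state_token(token))  # {'sub': 'user-42', 'exp': ...}
```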

Need for distributed authorization methods. Traditional web applications assumed that one browser client would access one web application connected to one or more databases. The web page would be assembled in the middle and then passed to the user. But modern applications embrace new architectures in which one client, like a mobile app, builds the user interface (UI) from multiple APIs. Each API, in turn, may manage interactions with multiple microservices. Early security frameworks like OAuth 1.0 supported direct access to a single service but did not scale for distributed architectures.

Web of authentication. Distributed security needs to strike the right balance between the number of authentication efforts on back-end servers and the overhead and latency each call adds. In a complex authentication flow, the client authenticates to an initial service, then that service must authenticate with another back-end service, and so forth until the request is completed. One strategy is to use public-key cryptography to allow each service to validate incoming requests locally using a chain of interconnected public keys on top of OpenID Connect.
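A minimal sketch of that local validation, assuming the identity provider publishes its signing keys at a JWKS endpoint and that the PyJWT library is available; the URL, audience, and issuer values are placeholders.

```python
# A downstream microservice validates bearer tokens locally by checking the
# signature against the identity provider's published public keys, rather
# than calling the provider on every request.
# Requires: pip install "pyjwt[crypto]"
import jwt

jwks_client = jwt.PyJWKClient("https://idp.example.com/.well-known/jwks.json")

def validate_request_token(bearer_token: str) -> dict:
    """Verify the token's signature and core claims without contacting the IdP."""
    signing_key = jwks_client.get_signing_key_from_jwt(bearer_token)
    return jwt.decode(
        bearer_token,
        signing_key.key,
        algorithms=["RS256"],
        audience="orders-service",          # placeholder audience
        issuer="https://idp.example.com",   # placeholder issuer
    )
```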

OAuth 2.0 Provides Distributed Authorization

As websites began to take off, so did the number of security schemes for simplifying access using session cookies.

In late 2006, Blaine Cook, the chief architect at Twitter, began dreaming up the framework for a more generic approach that could be shared across websites, which evolved into OAuth 1.0. Unfortunately, there were ambiguous elements that could be implemented differently, and there was quite a bit of disagreement between mainstream websites and enterprise vendors on how it would work.

One big challenge was that the authentication scheme was baked into the specification, making it hard to support applications like mobile or microservice design patterns. So, work began on the OAuth 2.0 spec, which was more generic but also lacked support for a specific way to manage the security state. OAuth 2.0 shares only its goals with OAuth 1.0 and is not backward compatible.

OAuth 2.0 does a better job of separating the roles of resource servers and authorization servers. It introduces the notions of a client, authorization server, resource server, and resource owner. This makes it easier to describe an authorization flow that can protect sessions from being hijacked and reduces the threat of business logic attacks on the back-end server.
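To see how the roles fit together, here is a sketch of the token request a client sends to the authorization server in the authorization code flow (RFC 6749); the endpoint, client credentials, and authorization code are placeholders, and the requests library is assumed.

```python
# OAuth 2.0 authorization code flow, step 3: the client exchanges the one-time
# code it received on its redirect URI for tokens at the authorization server.
import requests

response = requests.post(
    "https://auth.example.com/oauth/token",     # authorization server's token endpoint
    data={
        "grant_type": "authorization_code",
        "code": "AUTH_CODE_FROM_REDIRECT",      # one-time code issued to the client
        "redirect_uri": "https://app.example.com/callback",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    },
    timeout=10,
)
tokens = response.json()
access_token = tokens["access_token"]  # later presented to the resource server
```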

There was some contention with OAuth 2.0, with vendors implementing different versions of the draft standard. Major vendors started implementing OAuth 2.0 after draft 10, and then another 22 revisions were made. Different vendors adopted parts of these that would not interoperate. Eventually, the maintainers of the standard pulled out the conflicting pieces and renamed the protocol a framework. Other pieces, covering authentication, tokens, and claims, were left to companion specifications.

Adding Authentication With OpenID Connect

Many things were left out of OAuth 2.0 to build consensus, such as the token type and identity framework. OpenID Connect adds an interoperable authentication protocol on top of OAuth 2.0. This complements OAuth's extensive library of flows used to manage access for sharing resources across services.

The significant innovation is that developers can authenticate users without creating and maintaining a separate password file. This improves security, since these files are sometimes compromised. OpenID Connect is the third generation of OpenID technology. The first version was not widely adopted. The second version, OpenID 2.0, was more fleshed out but was difficult to implement since it relied on XML.

OpenID Connect is much simpler and takes advantage of JSON, making it more accessible to modern developers. Popular security libraries and development tools natively support OpenID Connect, which further simplifies implementation. OpenID 2.0 required a custom signature system that was problematic and prone to errors. OpenID Connect instead introduced JSON Web Tokens, which are much easier to implement.
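One example of that simplicity is OpenID Connect discovery: a single JSON metadata document tells a client where the provider's endpoints and signing keys live. The sketch below assumes the requests library and uses a placeholder issuer URL.

```python
# Fetch the OpenID Connect discovery document and read the endpoints a client
# needs: where to send users to authenticate, where to exchange codes for
# tokens, and where the provider publishes its public signing keys.
import requests

issuer = "https://idp.example.com"  # placeholder issuer
config = requests.get(f"{issuer}/.well-known/openid-configuration", timeout=10).json()

print(config["authorization_endpoint"])  # where the user is sent to authenticate
print(config["token_endpoint"])          # where the client exchanges the code for tokens
print(config["jwks_uri"])                # where the provider's public signing keys live
```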

Replacing Cookies With JWT

Around 2011, researchers began exploring how JSON could simplify web security in the same way it had simplified APIs. John Bradley and Nat Sakimura introduced a simple signing mechanism for JSON, which evolved into the JWT framework spelled out in RFC 7519 in 2015. The core spec describes representing “claims” digitally encoded inside a JSON payload as a token.

The token structure includes a header, a payload, and a signature. The header indicates the type of token and the signing algorithm. The payload carries the claims. The signature is generated by signing the encoded header and payload, either with a shared secret (for HMAC algorithms) or with the sender’s private key (for asymmetric algorithms).
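The structure is easy to see by building a token by hand. The sketch below uses only the Python standard library and an HMAC shared secret; in practice a vetted JWT library should do this work, and the secret shown is a placeholder.

```python
# Build a JWT by hand so the three dot-separated parts are visible:
# base64url(header) . base64url(payload) . base64url(signature)
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header  = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "user-42", "exp": 1893456000}).encode())

# HS256: HMAC-SHA256 over "header.payload" using a shared secret.
signing_input = f"{header}.{payload}".encode()
signature = b64url(hmac.new(b"shared-secret", signing_input, hashlib.sha256).digest())

token = f"{header}.{payload}.{signature}"
print(token)  # header.payload.signature
```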

The tokens can be used to encrypt data between parties in a way that hides it from others, or to apply digital signatures that allow the recipient to validate the integrity of claims in a communication. A claim is any statement issued by the appropriate source that can be cryptographically verified. Claims can be used to verify who issued the JWT (iss), the subject it refers to (sub), the recipient it is intended for (aud), and when it expires (exp). Tokens may also include public claim names registered in the IANA JSON Web Token Claims registry or namespaced by a provider such as Google, as well as private claim names for use in a restricted flow.
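Here is a sketch of issuing and verifying those registered claims with the PyJWT library; the issuer, audience, secret, and private claim name are placeholders.

```python
# Issue a token carrying the registered claims named above, then verify them.
# Requires: pip install pyjwt
import time
import jwt

claims = {
    "iss": "https://idp.example.com",    # who issued the token
    "sub": "user-42",                    # whose claims these are
    "aud": "orders-service",             # who the token is intended for
    "exp": int(time.time()) + 600,       # when it stops being valid
    "https://example.com/tier": "gold",  # a private, namespaced claim
}
token = jwt.encode(claims, "shared-secret", algorithm="HS256")

# decode() rejects the token if the signature, issuer, audience, or expiry check fails.
verified = jwt.decode(
    token,
    "shared-secret",
    algorithms=["HS256"],
    audience="orders-service",
    issuer="https://idp.example.com",
)
print(verified["sub"])
```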

JWT provides several benefits over token schemes like Simple Web Tokens (SWT) and Security Assertion Markup Language (SAML). SWT only supported symmetric keys, which complicated the authorization flow. Both SAML and JWT can use public-key cryptography, in which a pair of public/private keys can verify the source and hide the data. JWT is also more compact than SAML, which reduces the overhead and packet sizes. JSON also maps directly to the data structures used in modern APIs, making the tokens easier to generate and parse.

Common Use Cases

The most common use case is authorization. After a user or service has been authenticated, subsequent communications can use the JWT to access services permitted for that user or service. It is commonly used as part of a single sign-on (SSO) implementation since it can be used across multiple domains.
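In practice that usually means presenting the JWT as a bearer credential on each call, as in this sketch (the endpoint URL is a placeholder and the requests library is assumed):

```python
# Attach the previously issued JWT to each subsequent API request.
import requests

def call_orders_api(access_token: str) -> requests.Response:
    """Present the JWT as a bearer credential on a subsequent API call."""
    return requests.get(
        "https://api.example.com/v1/orders",  # placeholder API endpoint
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
```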

Another common use case is secure information exchange. In these cases, the JWT is signed, and optionally encrypted, using public/private key pairs. The recipient uses the sender’s public key to verify the source and confirm that the data has not been tampered with, and its own private key to decrypt the message if it was encrypted.
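The signing half of that exchange might look like the sketch below, using PyJWT with the cryptography package and a throwaway key pair; encrypting the payload as well would require a JWE-capable library, which this sketch leaves out, and the claim names are placeholders.

```python
# Sign claims with a private key; verify source and integrity with the public key.
# Requires: pip install "pyjwt[crypto]"
import jwt
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Throwaway key pair for the sketch; real deployments use keys managed by the
# identity provider or a key management service.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
private_pem = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.NoEncryption(),
)
public_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# The sender signs the claims with its private key...
token = jwt.encode({"sub": "user-42", "doc": "invoice-123"}, private_pem, algorithm="RS256")

# ...and the recipient needs only the public key to verify them.
claims = jwt.decode(token, public_pem, algorithms=["RS256"])
print(claims)
```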

Scopes provide a way of limiting access to an appropriate subset of resources. For example, one scope might grant access to the free tier of a nifty customer relationship management (CRM) service, while another scope provides access to all the extra features available on the gold tier. Scopes can also limit access based on who owns the data. For example, a scope could allow users to view the customer records they created in the CRM system but not records created by others.
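Enforcing a scope then becomes a simple check against the verified token's claims. The scope names and claim layout below are illustrative, and the token is assumed to have already passed signature validation as shown earlier.

```python
# Check whether a verified token grants a required scope before serving a request.
def require_scope(claims: dict, required: str) -> bool:
    """OAuth 2.0 conventionally carries scopes as a space-delimited string."""
    granted = set(claims.get("scope", "").split())
    return required in granted

claims = {"sub": "user-42", "scope": "crm:read:own crm:free-tier"}
print(require_scope(claims, "crm:read:own"))   # True
print(require_scope(claims, "crm:gold-tier"))  # False
```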

How to Do It Right

Application security techniques evolved to secure individuals accessing web applications. Traditional applications were designed to run on a stand-alone basis, and integrating across multiple back-end services was an afterthought. But enterprises need to consider a different approach when securing applications built from a collection of APIs.

Doing authZ and authN right requires creating an infrastructure where users or services can be authorized in one place and then have their credentials forwarded to the various applications and APIs used to gather data on their behalf. It’s also important to think about how to automatically assign and manage roles for individuals and services that relate to the data they need access to rather than the services used to access it. A focus on protecting the data rather than the services makes it easier to extend security to new services without having to create additional rules.
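As a sketch of what a data-centric check might look like, the function below keys its decision off the record's owner and the caller's role claim rather than the service handling the request; the field and claim names are hypothetical.

```python
# Decide access based on who owns the data and the caller's role claims,
# so the same rule works no matter which service or API serves the record.
def can_read_record(record: dict, claims: dict) -> bool:
    if "crm-admin" in claims.get("roles", []):
        return True  # admins may read any record
    return record.get("owner") == claims.get("sub")  # others only their own data

record = {"id": "cust-7", "owner": "user-42"}
print(can_read_record(record, {"sub": "user-42", "roles": []}))  # True
print(can_read_record(record, {"sub": "user-99", "roles": []}))  # False
```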

Enterprises can build this infrastructure on their own by combining appropriate encryption libraries. But this adds overhead for maintaining and updating these components.

A much better practice is to combine industry-leading frameworks and tools such as OAuth 2.0 for authZ, OpenID Connect for authN, and JWT to implement signed and encrypted tokens. The combination of these is well documented and can provide the best framework for protecting the API infrastructure. More importantly, enterprises can also benefit from the wide use of these tools as new threats are discovered and new best practices evolve.

Enterprises should also consider how to protect the business logic that operates across this infrastructure. Microservice architectures can expose more API endpoints to outsiders. Modern API observability tools like Traceable can provide another layer of protection at the business logic level, catching attacks that traditional authN and authZ tools might miss.

About the Author

George Lawton is a technology writer and regular contributor to The Inside Trace.

View a recorded demo of Traceable Defense AI.