Testing code that relies on remote APIs that themselves rely on OAuth can be painful, if not impossible. This is how we do it.
For all of its well-publicised issues, OAuth is a great step forward for secure authentication and authorisation, allowing apps to access remote APIs on behalf of users. We use OAuth to access LinkedIn data on behalf of our users in a number of ways, which is of enormous benefit to both us and our users.
The great thing about OAuth from a security point of view is that the end user only ever gives their password to the identity provider. That also happens to make it a royal PITA when it comes to testing, as it involves a complex 'dance' of HTTP requests and redirects, and inevitably means interacting with services outside of your control.
We had some initial success with mocking, but in the end something a little less sophisticated was called for. This is where we've got to - it seems to work for us, so I thought we'd share it. You will need:
- A valid test user account with the service provider (e.g. LinkedIn)
- A valid test application with the service provider
- Your choice of OAuth client library (we're a Python shop, so we use requests-oauthlib)
This is a screenshot of the LinkedIn application developer settings (NB they're not real, so don't try using them!)
If you're lucky enough to be working with a language that has an interactive shell (e.g. Ruby, Python), then the next bit is best done in the shell. Otherwise, services like Apigee will help enormously.
Use the shell / service to determine your test user's 'access token' and 'access secret'. What these mean, and how you get them, are beyond the scope of this post - I'm assuming that if you're still reading you know what they are; if not, LinkedIn has a good starter: http://developer.linkedin.com/documents/quick-start-guide
The net result of all this is you will end up with four pieces of information:
- Your application API key
- Your application API secret
- User access token
- User access secret
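For reference, the shell session that produces the user tokens can be sketched like this. This is a minimal sketch, assuming requests-oauthlib; the endpoint URLs and the `fetch_user_tokens` helper are my own illustration, not values taken from this post:

```python
# Sketch of the three-legged OAuth 1.0a 'dance' using requests-oauthlib.
# The endpoint URLs and the helper name are illustrative placeholders.
from requests_oauthlib import OAuth1Session

REQUEST_TOKEN_URL = "https://api.linkedin.com/uas/oauth/requestToken"
AUTHORIZE_URL = "https://api.linkedin.com/uas/oauth/authorize"
ACCESS_TOKEN_URL = "https://api.linkedin.com/uas/oauth/accessToken"


def fetch_user_tokens(api_key, api_secret):
    """Run the OAuth dance and return (access_token, access_secret)."""
    session = OAuth1Session(api_key, client_secret=api_secret)
    # Step 1: get a temporary request token
    session.fetch_request_token(REQUEST_TOKEN_URL)
    # Step 2: visit this URL as the test user, authorise the app,
    # and note the verifier / PIN that is displayed
    print(session.authorization_url(AUTHORIZE_URL))
    verifier = input("Verifier / PIN: ")
    # Step 3: exchange the verified request token for the access tokens
    tokens = session.fetch_access_token(ACCESS_TOKEN_URL, verifier=verifier)
    return tokens["oauth_token"], tokens["oauth_token_secret"]
```

The returned pair is the 'access token' and 'access secret' referred to above; together with the application key and secret you have all four values.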
Having all of these will allow you to call the relevant API on behalf of the user. Save this information now - it's the foundation of the testing process below. Calling the API, using these tokens, will allow you to see and save the output of the API call.
As this is 'real' output, it can be used in testing, without having to call the API again. And this is where our approach to testing kicks in.
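The capture step can be as simple as a small one-off function along these lines - the function name, the `format` parameter and the output path are my own illustration, not part of the original process:

```python
# Call the API once with the saved credentials and store the JSON body
# for reuse as static test input. The caller supplies the URL, an
# OAuth1 auth object, and an output path (all illustrative here).
import json

import requests


def capture_api_output(url, auth, path):
    """Fetch the live API response and save it as static test input."""
    resp = requests.get(url, params={"format": "json"}, auth=auth)
    resp.raise_for_status()
    with open(path, "w") as f:
        json.dump(resp.json(), f, indent=2)
    return resp.json()
```

The saved file (or a pasted copy of its contents) then becomes the constant used by the offline tests.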
We have two classes of test - those that call the API directly, and those that use the output of API calls within the application. In those that do not call the API directly, we use the stored output from the manual process above - it's just stored as JSON within the test itself; it is essentially a constant.
For one of our test cases - importing recommendations from LinkedIn - we have ten individual tests. Two of these require access to the API; the other eight use the static output. If the two tests fail, then we know that we need to update either the static content or the stored credentials. If the eight tests fail, then we know that something else has caused the regression.
In addition to this 'live' v 'static' distinction, we have a settings switch that allows us to run the tests when offline. By setting DISABLE_ONLINE_ONLY_TESTS=True, the live API tests will be skipped, but the other eight will run as expected.
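How the switch is stored is up to you; here is a minimal sketch, assuming an environment variable rather than any particular settings framework:

```python
# Read the offline switch from the environment; it defaults to False,
# so the live API tests run unless explicitly disabled.
import os

DISABLE_ONLINE_ONLY_TESTS = os.environ.get("DISABLE_ONLINE_ONLY_TESTS") == "True"
```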
This code sample demonstrates the skeleton test case:
import json
from unittest import TestCase, skipIf
import requests
import requests_oauthlib
# keys, tokens and the offline switch live in your settings module
from settings import (API_KEY, API_SECRET, ACCESS_TOKEN,
                      ACCESS_SECRET, DISABLE_ONLINE_ONLY_TESTS)

URL_GET_PROFILE = "http://api.linkedin.com/v1/people/~"

class LinkedInTests(TestCase):
    def setUp(self):
        # static output saved from a real API call
        self.test_json = json.loads('{"headline": "Senior Partner, Loblaw Law LLP"}')
        self.test_oauth = requests_oauthlib.OAuth1(
            API_KEY, API_SECRET, ACCESS_TOKEN, ACCESS_SECRET)

    @skipIf(DISABLE_ONLINE_ONLY_TESTS is True, "Online-only tests are disabled.")
    def test_live_api(self):
        "Confirm that the live API response is valid."
        resp = requests.get(URL_GET_PROFILE, params={"format": "json"},
                            auth=self.test_oauth)
        self.assertEqual(resp.status_code, 200)
        # now you can use self.test_json with confidence - this means that
        # further tests can be run without referring back to the API.

    def test_static_output(self):
        "Run application code against the static output - no API call."
        x = do_something(self.test_json)  # do_something = your application code
Testing like this has the dual benefit of being able to run tests offline, against some static test input, whilst also validating that input against the actual API output. If the API changes, it will be picked up; if any methods that use the API output cause regressions, these will also be picked up.
Testing OAuth services is painful - so make it easy on yourself.