API Configuration

If you have diverse audiences for your API, you may wish to classify them into different types of partners and users, differentiating between the levels of business service they receive. The account plans mechanism allows you to create these kinds of tiers.
Note that this feature is only available on Pro plans and above.

At the end of this HowTo you will have created two different types of account plans with different services on offer and price points for API Partners.

After signing into your Administration Dashboard, navigate to the API > Account Plan area.

Once there, click on the new button to create a new account plan to add to the list (or edit an existing one).

On the subsequent screen you can name the account plan and configure basic information such as price and trial periods. This information is used only if billing is enabled within your system, otherwise the Account plan functions primarily as a label to differentiate the level of service to apply.

Click save on the account plan to create the new plan.

Once this is complete, a new view is added to the edit view: a list of features associated with the plan. Typically these are non-technical features related to the level of service accounts on this plan will receive (e.g. 24/7 support, key account manager, etc.).

To add a new feature:

  • Click the new feature button on the control panel.
  • Enter a human-readable name and a system_name for the feature.
  • Click Save.

The feature will now be visible on all account plans. Clicking the cross or tick mark on the screen will disable or enable the feature for a given plan.

Account plan features have no special function within 3scale, but they can be used to control portal behavior via Liquid tags. For logged-in users, you can retrieve the account plan they are subscribed to by making a call to the following object and inserting conditional behavior:
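As a sketch (the `current_account.bought_account_plan` drop and its properties are assumptions here; check the Liquid reference in your portal for the exact names):

```liquid
{% if current_account.bought_account_plan.name == "Premium" %}
  <p>Welcome to the premium partner program!</p>
{% endif %}
```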

It is also possible to iterate over the features associated with a plan in order to control behavior. This is useful if you have many plans, including custom plans for some customers.
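A hedged sketch of such an iteration (again, the drop and property names are assumptions; verify them against the Liquid reference):

```liquid
<ul>
{% for feature in current_account.bought_account_plan.features %}
  <li>{{ feature.name }}</li>
{% endfor %}
</ul>
```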

The above steps create the account plans and make it possible for you, as the portal administrator, to change an account from one plan to another. However, they do not make the plans visible to your portal users or allow them to switch plans.

In order to do this you need to publish both plans and generate an Account plan signup page – contact 3scale support for how to do this.

Contact us to help structure signup workflows in this scenario.

While Account plans provide a high level of control, they are not required to differentiate between levels of service. In most cases, you will use application plans to make this differentiation and create different bundles of rate limits for your API users. See the corresponding HowTo on Setting up Developer Plans.


To control what can be done with your API, you set the terms of engagement for applications that have access – this is done by provisioning application plans.

Application plans are classes of access rights for the API which determine rate limits, which methods are accessible, which features are enabled, and so on. Every application accessing your API will be associated with an application plan.

To see how to set up application plans, see the How To on Provisioning Developer Rate Limits.


Depending on your API, you may need to use different authentication patterns to issue credentials for access. These can range from API keys to OAuth tokens and custom configurations. This HowTo covers how to select between the standard authentication patterns available.

3scale supports the following authentication patterns out of the box:

  • Standard API keys: single randomized strings or hashes acting as both an identifier and a secret token.
  • Application identifier and key pairs: an immutable identifier plus mutable secret key strings.
  • OAuth client tokens (OAuth 1.0 and 2.0): OAuth client identifier, client secret, and referrer domain
    combinations.

In addition, we support some more advanced custom scenarios – please contact us for more information on meeting your specific needs.

3scale also supports coupling issued credentials with IP address filtering or referrer domain filtering – see the extra section at the end of this HowTo.

By the time you complete this HowTo you’ll know how to set the authentication pattern on your API and the effect this has on applications communicating with your API.

Once in your Administration Dashboard, navigate to the API tab and select the service you wish to work on (there may be only one service, named “API”, in which case select this). Click on the Settings link.

Note that each service you operate could use a different authentication pattern, but only one pattern can be used per service.

It is not recommended to change the authentication pattern once credentials have already been registered, since behavior may be unpredictable. To change authentication patterns, we recommend creating a new service and migrating customers.

Choose the required authentication mode from the radio buttons at the bottom of the section.

Depending on the credential type chosen, you will likely need to accept different parameters in your API calls (key fields, IDs, etc.). The names of these parameters need not be the same as those used internally at 3scale; your 3scale authentication will function correctly as long as the correct parameter names are used in calls to the 3scale backend.

To test that the credential sets are working, you can create a new application to issue credentials for the API. Navigate to the Accounts area of your dashboard, click on the account you wish to use, and click the new application button.

Filling out the form and clicking save will create a new application with credentials to use the API. You can now use these credentials to make calls to your API; they will be checked against 3scale's list of registered applications.

Each of these patterns is described in more detail below.

The simplest form of credentials supported is the single API key model. Here, each application with permissions on the API has a single (unique) long character string, something like this:

API Key = 853a76f7c8d5f4a1ee8bf10a4e0d1f13

The string acts as both an identifier and a secret token for use of the API. It is recommended that this pattern only be used in environments with low security requirements or with SSL on all API calls. The operations which can be carried out on the token and application are:

  • Application suspend: this suspends the application's access to the API; in effect, all calls to the API with the relevant key will be suspended.
  • Application resume: undoes the effect of an application suspend action.
  • Key regenerate: this action generates a new random string key for the application and associates it with the application. Immediately after this action is taken, calls with the previous token will cease to be accepted.

The latter action can be triggered from the API Administration dashboard and (if permitted) from the API developers' user console.
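A call under this pattern might look like the following (the endpoint and the user_key parameter name are illustrative; your API defines its own):

```shell
# Single API key passed as a query parameter (hypothetical endpoint)
curl "https://api.example.com/v1/words/awesome.json?user_key=853a76f7c8d5f4a1ee8bf10a4e0d1f13"
```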

Whereas the API key pattern combines the identity of the application and the secret usage token in one token, this pattern separates the two. Each application using the API is issued an immutable initial identifier known as the Application ID (App ID). The App ID is constant and may or may not be secret. In addition, each application may have 1-n Application Keys (App Keys). Each key is associated directly with the App ID and should be treated as secret.

App Id = 80a4e03
App Key = a1ee8bf10a4e0d1f13853a76f7c8d5f4
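A call under this pattern passes both values, for example (hypothetical endpoint; parameter names are illustrative):

```shell
# app_id identifies the application, app_key is the secret (hypothetical endpoint)
curl "https://api.example.com/v1/words/awesome.json?app_id=80a4e03&app_key=a1ee8bf10a4e0d1f13853a76f7c8d5f4"
```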

In the default setting, developers are able to create up to 5 keys per application. This allows a developer to create a new key, add it to their code, redeploy their application, and then disable old keys – causing none of the application downtime that an API key regeneration would.

Note that statistics and rate limits are always kept at the application ID level of granularity and not per API Key. If a developer wishes to track two sets of statistics, they should create two applications rather than two keys.

It is also possible to change mode in the system and allow applications to be created without application keys present. In this case the 3scale system will authenticate access based on the App ID only (no key checks are made). This mode is useful, for example, in widget-type scenarios or where rate limits are applied to users rather than applications. Note that in most cases you will want your API to enforce at least one application key per application (this setting can be found in the Settings menu item under usage rules).

OAuth is a set of specifications which enable a variety of different authentication patterns for APIs. The two major versions released so far (v1 and v2) are significantly different, and version 2 includes a number of different supported authentication flows.

3scale supports all flavors of OAuth APIs with one overall OAuth authentication pattern. OAuth support is described in the OAuth HowTo.

If you have authentication needs which are not covered by this HowTo, let us know by email at support@3scale.net for information on what else we support.

In addition, 3scale supports coupling credentials with authorized IP ranges per key or referrer domains per key to restrict where specific API credentials can be used. You can enable this additional filtering by navigating to the API tab and accessing the Settings screen for the API.


There are several different ways to add 3scale management to your API – including using a Varnish proxy, an Nginx proxy, a CDN, or code plugins. This HowTo drills down into how to use the code plugin method to get you set up.

By the time you complete this HowTo you will have configured your API to use one of the available 3scale code plugins to manage access traffic.

3scale API plugins are available for a variety of implementation languages including Java, Ruby, PHP, .NET and others – the full listing can be found in the code libraries section. The plugins provide a wrapper for the 3scale API to enable:

  • API Access Control
  • API Traffic Reporting

which connect back into the 3scale system to apply the policies, keys, rate limits and other controls that you can put in place via the interface – see the Hello World API QuickStart guide for how to configure these elements.

Plugins are deployed with your API code to insert a traffic filter on all calls as shown in the figure.

Once you have your 3scale account, navigate to the code libraries section on this site, choose the plugin you plan to work with, and click through to the code repository to get the bundle in the form that you need.

If your language is not supported or listed, let us know and we’ll let you know if there are any ongoing support efforts for your language. Alternatively you can connect directly to the 3scale API.

As described in the Hello World API QuickStart guide, you can configure multiple metrics and methods for your API on the API control panel. Each metric and method has a system name which will be required when configuring your plugin. You can find the metrics in the application plans area of your API.

For more advanced information on metrics, methods and rate-limits see the specific HowTo Guide on rate limits.

Armed with this information, return to the code and add the downloaded code bundle to your application. This step varies for each type of plugin, and the form it takes depends on the way each language framework uses libraries. For the purposes of this example we'll proceed with PHP; instructions for other plugins are included in the README documentation of each plugin.

For PHP: Require the ThreeScaleClient.php file (assuming you placed the library somewhere within the
include path):

Then, create an instance of the client, giving it your provider API key:

Because the object is stateless, you can create just one and store it globally.

To authorize a particular application, call the `authorize` method passing it the application id and optionally the application key:

Then call the `isSuccess()` method on the returned object to see if the authorization was
successful:

If both provider and app id are valid, the response object contains additional information
about the status of the application:

If the plan has defined usage limits, the response contains details about the usage broken down by the metrics and usage limit periods.

If the authorization failed, the `getErrorCode()` returns system error code and `getErrorMessage()` human readable error description:

To report usage, use the `report` method. You can report multiple transactions at the same time:

The `“app_id”` and `“usage”` parameters are required. Additionally, you can specify a timestamp
for the transaction:

The timestamp can be either a unix timestamp (as an integer) or a string. The string has to be in a
format parseable by the [strtotime](http://php.net/manual/en/function.strtotime.php) function.
For example:
For example:

"2010-04-28 12:38:33 +0200"

If the timestamp is not in UTC, you have to specify a time offset. That’s the “+0200”
(two hours ahead of Coordinated Universal Time) in the example above.

Then call the `isSuccess()` method on the returned response object to see if the report was
successful:

In case of error, the `getErrorCode()` returns system error code and `getErrorMessage()`
human readable error description:

(Note that rather than reporting traffic separately from authorizations, both can be done in a single call to the AuthRep method instead of report in the first instance.)
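Putting the pieces together, the PHP calls described above might look like the following sketch. It is based on the historical 3scale PHP client; class, method and parameter names may differ in the version of the plugin you downloaded, so check its README:

```php
<?php
// Sketch only; names and signatures may differ in your plugin version.
require_once('lib/ThreeScaleClient.php');

// Create one client with your provider API key; the object is stateless,
// so a single global instance is enough.
$client = new ThreeScaleClient("YOUR_PROVIDER_KEY");

// Authorize a particular application by app id (and optionally app key).
$response = $client->authorize("YOUR_APP_ID", "YOUR_APP_KEY");

if ($response->isSuccess()) {
  // Report usage; several transactions can be sent in one call, each with
  // an optional timestamp (unix timestamp or strtotime-parseable string).
  $client->report(array(
    array('app_id'    => "YOUR_APP_ID",
          'usage'     => array('hits' => 1),
          'timestamp' => "2010-04-28 12:38:33 +0200")));
} else {
  // On failure, inspect the system error code and human readable message.
  echo $response->getErrorCode() . ": " . $response->getErrorMessage();
}
?>
```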

Once the required calls are added to your code, you can deploy (ideally to a sandbox/testing environment) and make API calls to your endpoints. As the traffic reaches the API it will be filtered by the plugin, and keys will be checked against issued API credentials (refer to the Hello World API QuickStart for how to generate valid keys – a set will also have been created as sample data in your 3scale account).

To see if traffic is flowing, log into your API provider dashboard and navigate to the Monitoring tab – here you will see traffic reported via the plugin.


Once your application is making calls to the API, they will become visible on the Statistics dashboard.

If you’re receiving errors in the connection you can also check the “Errors” menu item under Monitoring.

This HowTo describes the simple form of plugin use with Synchronous calls to the 3scale API, but there is also an asynchronous variation possible, as well as proxies with key caching via Varnish.


Filtering API traffic before it reaches your application allows you to separate access control issues from your application logic and stop unwanted traffic from reaching your stack. This HowTo covers how to set up Varnish as a 3scale-powered API proxy to achieve just this.

At the end of this HowTo you will have set up a Varnish instance on your server and connected it to 3scale’s traffic management system.

The architecture for API delivery via Varnish is as shown in Figure 1. No changes are required to application software in this deployment model and Varnish instances (one or many depending on the number of data centers or load balancing requirements) are deployed in front of the application.
The 3scale Varnish module (download via GitHub) provides for a part of the cache space to be reserved for traffic control storage.

Once the 3scale Varnish module is installed, Varnish will check the validity of each API request against 3scale. If the check is positive, Varnish will serve the result from its cache or fetch the data from your origin server; finally, Varnish will report to 3scale to maintain consistent state and real-time analytics.

The connection between your local Varnish proxy and 3scale’s backend is, by default, asynchronous, using an eventual consistency model:

  • Each time a call reaches the API, Varnish first tries to authorize from cached data.
  • After the call response (and after serving the customer API call) a background call is triggered to 3scale to update the current policy status within the cache.

This configuration means that for cached keys, API traffic can be served uninterrupted by traffic management roundtrips; furthermore, data in the cache is kept fresh by background calls which do not add to API client wait times.

For this configuration you require:

  • The latest version of Varnish (V3.0 or above): http://www.varnish-cache.org.
  • The 3scale Varnish Module: https://github.com/3scale/libvmod-3scale/.

The module is configured and compiled as described in the module readme and once compiled, it is imported into your VCL configuration as follows (note that this is a sample only – see the bundle readme for precise instructions):

In addition a mapping file is required to determine which calls are checked against rate limits and reported to 3scale. A sample of the mapping file is provided in the module download in the VCL directory: Mapping File Sample.

Once Varnish is setup you’ll need to ensure that traffic for the API reaches Varnish before reaching your application.

Varnish can be used either in the way described above, as a proxy filter for the API traffic, or as a stand-alone out-of-band system. An out-of-band configuration is shown in Figure 2. In this mode calls are made to the Varnish instance by the API application or by some other load balancer, and the API traffic itself does not flow through Varnish.

The default Varnish setup here is concerned with adding an authentication and reporting layer in front of the API, and does not require the use of Varnish to actually cache API responses themselves. That is an independent choice; if desired, the standard cache controls described on the Varnish web site can be used (http://www.varnish-cache.org).


You can greatly reduce the latency of authorization responses by using Varnish to cache the authorization requests between your app and the 3scale backend.

At the end of this HowTo you will have set up a Varnish instance on your server and connected it to 3scale’s traffic management system.

  • You can already make authorization calls to the 3scale backend with a client library (https://support.3scale.net/libraries) or by constructing your own HTTP calls (https://support.3scale.net/reference/activedocs)
  • Your system environment has all the prerequisites to allow you to install Varnish

In this mode the auth calls to 3scale go through Varnish. The full incoming API calls never go through Varnish. The auth calls to 3scale have to be made through Varnish from your application or from some other load balancer.

For this configuration you require:

  • The latest version of "Varnish - at least V3.0 or above":http://www.varnish-cache.org. You can follow the "guide on installing Varnish":https://www.varnish-cache.org/docs/3.0/installation/install.html.
  • The 3scale Varnish Module: https://github.com/3scale/libvmod-3scale/.

The module is configured and compiled as described in the module readme (https://github.com/3scale/libvmod-3scale/blob/master/README) and, once compiled, it is imported into your VCL configuration as follows (note that this is a sample only – see the bundle readme for precise instructions). Use the default simple version of the module: default_3scale_simple.vcl.

Note: You should start using Varnish with this simple VCL. Once everything is fully tested and working, you can customize it for a more complex Varnish set-up. Starting simple is really important for troubleshooting any problems with the initial connection to 3scale.

Refer to the Starting Varnish documentation. Note that you have to change the .vcl file path to match your .vcl file (e.g. ~/dev/varnish/libvmod-3scale/vcl/default_3scale_simple.vcl). Be sure that your auth calls hit Varnish rather than 3scale directly (e.g. localhost if you run Varnish there).
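A minimal start command might look like this (the paths and the listen address are assumptions for a local test set-up; adjust them to your environment):

```shell
# Start varnishd with the 3scale sample VCL, listening locally on port 8080
varnishd -f ~/dev/varnish/libvmod-3scale/vcl/default_3scale_simple.vcl -a 127.0.0.1:8080
```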

To check that Varnish is running correctly, run varnishlog:
varnishlog

In the log you will see every call going through Varnish (more info in the Logging to Varnish documentation).

To benchmark it using Apache Bench, take the URL from the Active Docs (https://support.3scale.net/reference/activedocs) curl command, then transform it so that the host su1.3scale.net is replaced with 127.0.0.1:8080, which is where Varnish is running (the -a option on the varnishd command). You should see all the calls in the logs, and in the stats for the chosen app.
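For example, a hypothetical benchmark of authrep calls routed through the local Varnish (substitute your own credentials; the path follows the 3scale Service Management API):

```shell
# 100 authrep calls, 10 concurrent, via Varnish on 127.0.0.1:8080 instead of su1.3scale.net
ab -n 100 -c 10 "http://127.0.0.1:8080/transactions/authrep.xml?provider_key=YOUR_PROVIDER_KEY&app_id=YOUR_APP_ID&app_key=YOUR_APP_KEY&usage%5Bhits%5D=1"
```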

Once Varnish is set up you’ll need to ensure that you filter your API traffic to strip off the keys and to make the auth (or, more usually, authrep) calls to 3scale through Varnish. Be sure that your auth calls hit Varnish rather than 3scale directly (e.g. localhost if you run Varnish there).


This HowTo shows the necessary steps to set up the integration with 3scale's management platform by means of a proxy. The proxy mode allows integration with 3scale without having to touch the source code of your API or having to re-deploy it.

The proxy has two modes of operation:

  • Sandbox mode: in this case 3scale hosts the proxy for you in the cloud. This mode is meant to be used in testing and development environments only. The support policy for the sandbox proxy is best-effort and no SLA is guaranteed, therefore, it cannot be used in production environments.

  • On-premise mode: Once you have configured your proxy on the sandbox you will be able to download the configuration files to run your own proxy on-premise. The local proxy will behave exactly the same way as the sandbox proxy and no further configuration will be required to launch your API. The on-premise mode is the intended mode of operation for production environments.

Getting API traffic in 1 minute

Below you can find a screenshot of the proxy configuration page. You can access it from the API > Settings > Integration section of your 3scale admin portal.

Proxy Configuration Page

Step 1: Declare your API backend

The API backend is the endpoint host of your API. For instance, if you were Twitter the API backend would be http://api.twitter.com/, or if you are the owner of the Sentiment API it would be http://api-sentiment.3scale.net/.

The proxy will redirect all traffic from your sandbox development endpoint to your API backend after all authentication, authorization, rate limits and statistics have been processed.

Step 2: Turn ON the sandbox proxy

To turn the sandbox proxy switch to on, you first have to save the settings (API backend, advanced settings, etc.) by clicking the Save button in the lower right part of the page. Turning on the proxy will then be possible. This process will deploy your proxy configuration (at this stage, the default configuration) to 3scale's hosted sandbox proxy.

Step 3: Get a set of sample credentials

Go to the Applications tab and copy the credentials (the keys) of any of your API users. If you do not have users yet, you can create an application yourself (from the details page of any individual developer account); the credentials will be generated automatically.

Typically the credentials will be a user_key or the pair app_id/app_key, depending on which authentication mode you are in (note that the sandbox does not currently support OAuth, though you can configure that in the on-premise config files, or alternatively use the Varnish or plug-in integration approaches).

Step 4: Get a working request to your API

We are almost ready to roll. Go to your browser (or command-line curl) and make a request to your own API to check that everything is working on your end.

For instance it could be something like this:

http://api-sentiment.3scale.net/v1/word/awesome.json

Note that you are not using 3scale's proxy yet. You are just getting the working example that will be used in the next step.

Step 5: Closing the circle

Now make the same request, but replacing your API backend hostname (in the example, api-sentiment.3scale.net) with your sandbox endpoint (e.g. if you were Twitter, you would change http://api.twitter.com/ to http://api.2445579856672.proxy.3scale.net/). You also have to add the parameters to pass the credentials that you just copied.

Continuing the example in this step-by-step guide it would be something like:

http://api.2445579856672.proxy.3scale.net/v1/word/awesome.json?app_id=YOUR_USER_APP_ID&app_key=YOUR_USER_APP_KEY

If you execute the request you will get the same result as in step 4. However, this time the request has gone through the 3scale sandbox proxy.

And that's it! You have your API integrated with 3scale.

3scale's sandbox proxy validates the credentials and applies any proxy rules that you have defined to handle rate limits, quotas and analytics. If you did not touch the mapping rules, every request to the proxy will increase the metric hits by 1; you can check in your admin console how the hits metric increases.

If you want to experiment further, you can test what happens if you try credentials that do not exist. The proxy will respond with a generic error message (you can define your own custom one).

You could also define a rate limit of 1 request per minute. When you try your second request within the same minute, you will see that the request never reaches your API backend. The proxy stops the request because it violates the quota that you just set up.

Proxy Basics

Now that you have got your proxy up and running it is worth reading the rest of the HowTo to learn more about the basic configuration options.

For more advanced use cases you can also check the Extended Proxy HowTo that describes the configuration options under the Advanced Settings tab.

Endpoints

  • What is your API backend? The API backend is the endpoint of your API. It is where the proxy will forward the requests that it receives.

    The API backend can also be HTTPS, in this case you just use the appropriate protocol and port, e.g. https://api-sentiment.3scale.net:443

  • What is the sandbox endpoint? The sandbox endpoint is where your internal developers will send the requests to your API. This applies to the sandbox mode only, in the on-premise mode you can have your custom developer endpoint.

    The sandbox endpoint is set by 3scale and cannot be changed, nor is it available on HTTPS, since it is not for production. Naturally, when using on-premise, custom domains and HTTPS are fully available.

Host Rewrite

This option is only needed for those API backends that reject traffic unless the Host header matches the expected one. In these cases, having a proxy in front of your API backend will cause problems since the Host will be the developer endpoint, e.g. xxx.yyy.proxy.3scale.net

To avoid this issue you can define the Host your API backend expects here

Host Rewrite

and the sandbox proxy will set the header.

Deployment Cycle

You can switch the sandbox proxy on and off by clicking the Sandbox Proxy slider.

Save will save your current configuration. However, changes are not deployed to the sandbox proxy automatically; you must deploy explicitly.

Save and deploy will save your changes and deploy them to the sandbox proxy. The system will notify you if there was any error during deploy.

Deployment History

Every time you deploy, the current configuration will be deployed to 3scale's sandbox proxy. From that point on, the API requests will be handled by the new configuration you deployed.

Note that it is not possible to automatically roll back to previous deploys. Instead we provide a history of all your deploys with the associated configuration files. These files can be used to check what configuration you had deployed at any moment in time. If you want to, you can recreate any deployment manually.

Mapping rules

By default we start with a very simple mapping rule:

Proxy Mapping Rules

This rule says that any GET request that starts with "/" will increment the metric hits by 1. Most likely you will remove this rule, since it is too generic.

The mapping rules define which metrics (and methods) you want to report depending on the requests to your API. For instance, below you can see the rules for the Sentiment API that serves as our example:

Proxy Mapping Rules

The matching of the rules is done by prefix and can be arbitrarily complex (the notation follows Swagger and Active Docs):

  • You can do a match on the path over a literal string:

    /v1/word/hello.json

  • Mapping rules can contain named wildcards:

    /v1/word/{word}.json

    This rule will match anything in the placeholder {word}, making requests like /v1/word/awesome.json match the rule.

    Wildcards can appear between slashes or between slash and dot.

  • Mapping rules can also include parameters on the query string or in the body:

    /v1/word/{word}.json?value={value}

    Both POST and GET requests follow the same notation. The proxy will try to fetch the parameters from the query string when it's a GET, and from the body when it's a POST, DELETE or PUT.

    Parameters can also have named wildcards.

Note that all mapping rules are evaluated. There is no precedence (order does not matter). If two different rules increment the same metric by one, and the two rules are matched, the metric will be incremented by two.

In the figure above you can see that the rule /v1 will always be matched for requests whose path starts with /v1, regardless of whether they are /v1/word or /v1/sentence.

Also following the example, a GET request to /v1/word/super_expensive_word.json will match 3 rules: the first rule will increment the metric word by 1, the 4th rule will increment the metric version_1 by 1, and the 5th rule will increment the metric word by 42; so, in the end, word will be incremented by 43.

Mapping rules workflow

The intended workflow to define mapping rules is as follows:

  • You can add new rules by clicking the Create Rule button. Then you select an HTTP method, a pattern, a metric (or method) and finally its increment. When you are done, click save.

  • Mapping rules will be grayed out on the next reload to prevent accidental modifications.

  • To edit an existing mapping rule you must enable it first by clicking the pencil icon on the right. To delete it, use the trash icon. Edits and deletions will be saved when you hit the save button.

Running your proxy On-Premise (aka Production)

Once you have set up your proxy and achieved the desired behavior, you can run it locally on your own servers (on-premise).

3scale automatically generates all the files needed to use Nginx as your API proxy. Nginx is a very powerful open-source web-server/proxy that is used in production in thousands of companies, 3scale being one of them.

If you are familiar with Nginx, it will take 1 minute to have your proxy running locally. Note that your Nginx installation must have the Lua module.

If you are not familiar with Nginx, we recommend that you install the fantastic OpenResty package, which is basically a bundle of the standard Nginx core with almost all of the third-party Nginx modules built in.

These are the details on how to get your API proxy (based on Nginx) up and running.

Step 1: Install the dependencies (for Ubuntu)

For Debian/Ubuntu linux distribution you should install the following packages using apt-get:
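The package list below is a sketch based on OpenResty's standard prerequisites; your distribution may need slightly different package names:

```shell
# Prerequisites for building OpenResty (Nginx + Lua) on Debian/Ubuntu
sudo apt-get install libreadline-dev libncurses5-dev libpcre3-dev \
    libssl-dev perl make build-essential
```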

For different systems check out the OpenResty documentation.

Step 2: Compile and install Nginx

Download the code and compile it, changing VERSION to your desired version (we run 1.2.3.8):
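A sketch of the download-and-compile step (the download URL is an assumption; check the OpenResty site for the current one). The --prefix matches the /opt/openresty path used later in this HowTo:

```shell
# Fetch, build and install OpenResty; replace VERSION with your desired version
wget http://openresty.org/download/ngx_openresty-VERSION.tar.gz
tar -xzvf ngx_openresty-VERSION.tar.gz
cd ngx_openresty-VERSION/
./configure --prefix=/opt/openresty --with-luajit
make
sudo make install
```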

At this point, we have Nginx + Lua installed via the excellent OpenResty bundle.

Step 3: Download your proxy configuration from 3scale

Download the proxy configuration files from 3scale by clicking the Download button. This will give you a zip file with two files inside:

  • The .conf file is a typical Nginx config file. Feel free to edit it or to copy-paste it into your existing .conf if you are already running Nginx.
  • The .lua file contains the logic that you defined on the web interface. Obviously you can modify the file to add new features or to handle custom requirements that are not supported by 3scale's proxy web interface.

Before going ahead, there are two things that you have to modify from the .conf file:

  • You should change the server_name directive from your sandbox endpoint (typically xxx.yyy.proxy.3scale.net) to your new developer frontend on your own domain.

    There is no need to define the server_name directive if you have only one domain. If server_name is defined, only the requests to that domain will be processed by Nginx, so you must either change it or remove it.

    Furthermore, if you are running multiple services within 3scale, each service has its own domain, so you must change them all.

  • You must specify the location of your .lua file in your filesystem. Warning! The .lua file must be readable by the user running the nginx worker processes (typically www-data on Ubuntu, nobody on Mac OS X). Otherwise the nginx workers will not be able to load the file when processing your incoming API requests.

    access_by_lua_file /PATH/YOUR-LUA-FILE.lua;
The .conf file has reminders of the lines that you must change; you can search for "CHANGE" to find all the lines that should be modified.

Step 4: Start and stop your API proxy

The only thing left is to start the Nginx-based API proxy. There are many ways; the most straightforward is:
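Assuming the default /opt/openresty prefix used in this guide, the start command looks like this (YOUR-CONFIG-FILE.conf is a placeholder for the file downloaded from 3scale):

```shell
# Start nginx, pointing -p at the working directory (the install prefix)
# and -c at the configuration file downloaded from 3scale.
PREFIX=/opt/openresty/nginx
sudo $PREFIX/sbin/nginx -p $PREFIX/ -c $PREFIX/conf/YOUR-CONFIG-FILE.conf
```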

The example assumes that the working directory (-p) of nginx is /opt/openresty/nginx, which is the path we passed during installation via configure --prefix=/opt/openresty. You can change it, but be aware of the user privileges.

The example also assumes that the .conf generated by 3scale is placed at /opt/openresty/nginx/conf/. Naturally, you should place the files and directories at the location that best suits your production environment, and start and stop the process as a system daemon instead of executing the binary directly.

To stop a running nginx:
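The stop command mirrors the start command, with the -s stop signal appended (paths assume the default /opt/openresty prefix used in this guide):

```shell
# Send the stop signal to the running nginx master process.
PREFIX=/opt/openresty/nginx
sudo $PREFIX/sbin/nginx -p $PREFIX/ -c $PREFIX/conf/YOUR-CONFIG-FILE.conf -s stop
```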

The option -s lets you send a signal to nginx. The process that will be stopped is the one whose pid is stored in /opt/openresty/nginx/logs/nginx.pid.

The nginx logs are by default in the same directory, /opt/openresty/nginx/logs/. It is highly advisable to check error.log when setting up the whole process.

Troubleshooting

This section covers some basic troubleshooting steps and solutions for common errors:

  • Error message in the admin console when configuring the sandbox proxy: "Sorry, protected domain" - if your API backend is in the same Amazon Region as the 3scale sandbox proxy, you need to provide an IP address for the backend instead of the host name
  • Error code 404 returned in API calls to the sandbox proxy - check the behavior when you call your API backend directly (frequently the backend is not running or is configured incorrectly)
  • Error code 404 or 502 (bad gateway) or message returned in API calls to the sandbox proxy: "Sorry, we have either encountered a problem or are currently undertaking server maintenance. Normal service will be resumed as soon as possible" - if your app runs on a hosted platform, check whether you require a host rewrite in the advanced options

The last thing to check before contacting 3scale support is the debugging section of the advanced guide. If you do need to contact us, please confirm that you have followed the troubleshooting steps and whether you have tried the host rewrite.


This is a step-by-step guide to deploy Nginx on your own server and have it ready to be used as a 3scale API proxy.

The 3scale API proxy requires some external modules for Nginx. Even though it is possible to compile Nginx with these modules from source, we recommend using OpenResty, an excellent bundle that already includes all the necessary requirements.

This guide covers the setup steps for Ubuntu/Debian. The dependencies for other Linux versions are well documented in the OpenResty installation guide.

Start by installing the necessary system dependencies and libraries:

sudo apt-get install libreadline-dev libncurses5-dev libpcre3-dev libssl-dev perl make

Check which is the latest stable version of OpenResty here.

Download OpenResty (replace the version number with the one corresponding to the latest stable version):

wget http://openresty.org/download/ngx_openresty-1.4.3.6.tar.gz

Unpack the archive, enter the resulting directory, and run the configure step before compilation:

tar -xzf ngx_openresty-1.4.3.6.tar.gz
cd ngx_openresty-1.4.3.6
./configure --with-luajit --with-http_iconv_module -j2

This will set the environment so that Nginx will be installed in the following path: /usr/local/openresty/

We recommend using the default path, but in case you need to change it you can use the --prefix=PATH option when invoking the configure step. Keep in mind that in that case some of the instructions in this document might be slightly different for you.

Build and install:

make
sudo make install

To transform Nginx into a 3scale API proxy ready-to-use for your API you just need to download your configuration files from your 3scale admin portal.

If you haven't yet configured your API endpoints in 3scale, do it now. Take a look at the Hello World Nginx guide to learn how to do it.

When you are done setting up your API in 3scale, head over to your admin portal. Go into the API Integration section and click on Download Nginx Config. This will give you a zip file with two files inside:

  • The .conf is a typical Nginx config file. Feel free to edit it or to copy paste it to your existing .conf if you are already running Nginx.
  • The .lua file contains the logic that you defined on the web interface. Obviously you can modify the file to add new features or to handle custom requirements that are not supported by 3scale's proxy web interface.

Before going ahead, there are two things that you have to modify from the .conf file:

  • You should change the server_name directive from your sandbox endpoint (typically xxx.yyy.proxy.3scale.net) to your new developer frontend on your own domain.
server {
    listen 80;
    server_name XXX.YYY.proxy.3scale.net;
    underscores_in_headers on;
    ...
}

There is no need to define the server_name directive if you have only one domain. If server_name is defined, only the requests to that domain will be processed by Nginx, so you must either change it or remove it.

  • Furthermore, if you are running multiple services within 3scale, each service has its own domain, so you must change them all.

You must specify the location of your .lua file in your filesystem.

access_by_lua_file /PATH/YOUR-LUA-FILE.lua;

The .conf file has reminders of the lines that you must change; you can search for "CHANGE" to find all the lines that should be modified.

Now, if you installed Nginx at the default location, you should copy these files to the /usr/local/openresty/nginx/conf/ directory.

Starting Nginx:

sudo /usr/local/openresty/nginx/sbin/nginx -c /usr/local/openresty/nginx/conf/nginx.conf

You can do all the other operations by appending options to the previous command:

  • stopping Nginx: -s stop
  • restart Nginx: -s reload
  • run a syntax error test on the configuration file: -t

Instead of having to type the full path to the executable and configuration files every time, Nginx can be configured to be operated through the Linux service command.

To do so, you should create an init.d script for Nginx. This is a script that describes the environment in which Nginx will be run: location of the binary, the configuration and logs, and several other variables.

You can get an init.d script for Openresty from here: https://gist.github.com/vdel26/8805927.

Copy that script to the /etc/init.d/ directory. Edit the file if necessary to make sure that the CONF variable points to the right configuration file. By default it expects the file to be named nginx.conf, so change this variable if your configuration file has a different name.

In case you have installed Nginx to a different location than /usr/local/openresty (the default), you will also need to edit the PREFIX variable so that it points to the right location.

Once you are done editing the file, run the following command to set it up:

sudo update-rc.d nginx defaults

Now you will be able to invoke the usual operations in Nginx but in a more convenient way.

  • starting Nginx: sudo service nginx start
  • stopping Nginx: sudo service nginx stop
  • restart Nginx: sudo service nginx restart
  • run a syntax test on the configuration file: sudo service nginx test


There are times when you will be switching between different configuration files. This is a common situation when you are troubleshooting an issue or simply adding new directives to your configuration.

We recommend using symbolic links to make this scenario less cumbersome. You can keep different versions of your configurations in separate directories and then link them to the place where Nginx expects the configuration file to be.

Then, when changing the version of the configuration currently running, you will only need to change the link and restart Nginx.

For example:

sudo ln -s /home/ubuntu/My-Nginx-Configs/v2/nginx-v2.conf /usr/local/openresty/nginx/conf/nginx.conf

Keep in mind that you will also have to link the Lua file that holds the other part of your 3scale configuration:

sudo ln -s /home/ubuntu/My-Nginx-Configs/v2/nginx-v2.lua /usr/local/openresty/nginx/conf/nginx.lua


The easiest and most powerful way to integrate your API with 3scale is using our Nginx-based API proxy. To make the integration even easier, we provide the proxy as an AMI in the AWS Marketplace.

This is a zero-setup solution that will get you up & running with your traffic going through Nginx and using 3scale in a matter of minutes.

3scale AMI listing

The AMI contains:

  • a preinstalled OpenResty bundle, including Nginx and complementary modules (such as Lua scripting support).
  • a helper command line tool to get the Nginx configuration generated by 3scale for your API.

Before launching it, note that:

  • you will need an AWS account.
  • you should have configured your API details in 3scale beforehand to be able to automatically generate your Nginx configuration. In case you are not sure, read how to configure your API in this tutorial.

Launching your own proxy instance using the AMI

  1. go to the 3scale AMI page in the AWS Marketplace
  2. you have two options to launch the AMI: 1-click launch or through the EC2 console. Pick the 1-click launch since it is the simplest way.
  3. Using the 1-click launch option, these are the settings where you will need to make a choice (go with the defaults for all the others unless you have good reasons to change them):
    • AWS region
    • EC2 instance type
    • Key pair (very important – pick a key pair for which you have the corresponding private key available in your computer, otherwise you won't be able to access the instance)
  4. click the Launch with 1-Click button.
  5. your instance of the AMI is now starting; it will be ready in about 2 minutes.
  6. head over to your AWS Management Console and go into the running instances list in the EC2 section.
  7. check that your instance is ready to be accessed. That is indicated by a green check mark icon in the column named Status Checks.
  8. click on the instance in the list to find its public DNS and copy it
  9. log in through SSH using the ubuntu user and the private key you chose before. The command will look more or less like:
    ssh -i privateKey.pem ubuntu@ec2-12-34-56-78.compute-1.amazonaws.com
  10. once you log in, read the instructions that will be printed to the screen: all the necessary commands to manage your proxy are described there. In case you want to read them later, these instructions are located in a file named 3SCALE_README in the home directory.

Downloading your configuration from 3scale

The fastest way to get your Nginx configuration files from 3scale is by using the command line tool included in the AMI. You just need to run the following command:

download-3scale-config

You will be prompted to enter your 3scale admin domain (e.g. mycompany-admin.3scale.net) and your provider key. You will also be asked for the directory where you want the files to be downloaded: if you simply press Enter, they will be downloaded to /home/ubuntu/3scale-nginx-conf

The tool will save your credentials locally, so that if you make changes to your configuration (for example when you add a new endpoint mapping) you can just run the command without entering them again.

In case you want to be prompted again for your credentials, you can run the command with the reset option:

download-3scale-config --reset

Starting Nginx

You can now start running the API proxy with your own configuration! Assuming you downloaded it to the default location, the command you will need to enter is:

sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /home/ubuntu/3scale-nginx-conf/YOUR-CONFIG-FILE.conf

You will find other useful commands to operate Nginx in the 3SCALE_README document.

To stop the proxy:

sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /home/ubuntu/3scale-nginx-conf/YOUR-CONFIG-FILE.conf -s stop

To reload it (useful after you have made changes to the configuration):

sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /home/ubuntu/3scale-nginx-conf/YOUR-CONFIG-FILE.conf -s reload

If you like the AMI, please leave a 5-star review in the AWS Marketplace listing.
In case you experience any problem, let us know at support@3scale.net.

Creating and testing new versions of your Nginx configuration in a remote server can be quite cumbersome.

In this other document you will find tips to make that process easier, including adding Nginx as a system service and some advice on managing multiple versions of your configuration files.

Most errors in the proxy configuration can be detected and solved by looking at the Nginx logs:

  • access log: /opt/openresty/nginx/logs/access.log
  • error log: /opt/openresty/nginx/logs/error.log

You can find more information about Nginx in the official documentation page: http://nginx.org/en/docs/



This section covers the advanced settings option of 3scale's sandbox proxy.

For security reasons, any request from 3scale's proxy to your API backend will contain a header called X-3scale-proxy-secret-token. The value of this header can be set by you here:

Proxy secret token

Setting the secret token will act as a shared secret between the proxy and your API, so that you can block all API requests that do not come from the proxy if you so wish. This gives an extra layer of security to protect your public endpoint while you are in the process of setting up your traffic management policies with the sandbox proxy.

Your API backend must have a publicly resolvable domain for the proxy to work, so anyone who knows your API backend could potentially bypass the credentials checking. Because the sandbox proxy is not meant for production environments this should not be a problem, but it's always better to have a fence available.
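If your backend is itself served by Nginx, a minimal sketch of such a fence might look like the following (the secret value and the location block are placeholders you would adapt to your setup):

```nginx
location / {
    # Reject any request that does not carry the shared secret set in 3scale.
    # Nginx exposes the X-3scale-proxy-secret-token header as the variable below.
    if ($http_x_3scale_proxy_secret_token != "MY-SECRET-TOKEN") {
        return 403;
    }
    ...
}
```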

The API credentials within 3scale are always user_key or app_id/app_key, depending on the authentication mode you are in (OAuth is not available for the sandbox proxy). However, you might not want to use those credential names on your API. In this case you will need to set custom names for the credentials:

Custom user_key

Custom app_key/app_id

For instance, you could rename app_id to key if that fits your API better. The proxy will take the name key and convert it to app_id before making the authorize call to 3scale's backend. Note that the new credential name has to be alphanumeric.
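As an illustration (hypothetical domain and values), a developer's request using the renamed credential would look like:

```
GET /v1/words/awesome.json?key=YOUR-APP-ID HTTP/1.1
Host: api.example.com
```

The proxy reads the key parameter, renames it back to app_id, and performs the authorize call against 3scale's backend with that value.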

You can decide whether your API passes credentials in the query string (or body if not a GET) or in the headers.

Proxy Credentials Location

Another important aspect to have a full-fledged configuration is to define your own custom error messages.

It is important to note that 3scale's sandbox proxy will pass through any error message generated by your API. However, because the management layer of your API is now handled by the proxy, there are some errors that your API will never see, since such requests will be terminated by the proxy.

Custom Error Messages

These errors are the following:

  • Over limit: this error will be generated whenever an API request is above its allowed quota.
  • Auth failed: this error will be generated whenever an API request does not contain valid credentials. This can be because the credentials are fake, because the application has been temporarily suspended, etc.
  • Auth missing: this error will be generated whenever an API request does not contain any credentials. This occurs when users forget to add their credentials to an API request.
  • No match: this error means that the request did not match any mapping rule, so no metric is updated. This is not necessarily an error, but it means that either the user is trying random paths or your mapping rules do not cover legitimate cases.

Setting up the proxy configuration is easy, but some errors can still occur along the way. For those cases the proxy can return some useful debug information that will be helpful to track down what is going on.

To enable the debug mode on 3scale's sandbox proxy you can set the header

with your provider key on a request to your proxy. When the header is found and the provider key is valid, the proxy will add the following information to the response headers:

Basically, X-3scale-matched-rules tells you which mapping rules have been activated by the request (note that it is a list). The header X-3scale-usage tells you the usage that will be reported to 3scale's backend. Finally, X-3scale-credentials returns the credentials that have been passed to 3scale's backend.

In case the current configuration is not working but a previous configuration was OK, you can always go to the history of deployments. Every time you deploy, the Lua configuration file is saved. You can check the functions extract_usage_x(), where x is your service_id. In the extract_usage functions you can see the logic of your mapping rules:

the comment -- rule: /v1/word/{word}.json -- shows which particular rule the Lua code refers to. Each rule has a Lua snippet like the one above. In case you were wondering, line comments in Lua start with -- (block comments are delimited by --[[ and ]]), and in nginx configuration files they start with #.
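As a hand-written sketch (not the exact code 3scale generates; the service id and the metric name below are illustrative), an extract_usage function for such a rule might look like:

```lua
-- Illustrative sketch of a generated mapping-rule matcher.
-- "1234567890" and "word_sentiment" are hypothetical placeholders.
function extract_usage_1234567890(method, path)
  local usage = {}
  -- rule: /v1/word/{word}.json --
  if method == "GET" and string.match(path, "^/v1/word/[^/]+%.json") then
    usage["word_sentiment"] = (usage["word_sentiment"] or 0) + 1
  end
  return usage
end
```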

Unfortunately there is no automatic rollback, so you will have to read the Lua file if you need to know what you deployed in the past.

3scale's sandbox proxy is quite flexible, but there are always things that cannot be done, either because the console interface does not allow it or because of security reasons due to a multi-tenant proxy.

If you need to extend your API proxy you can always download the proxy configuration and run it locally on your own servers. See the on-premise section in the Basic Howto.

Needless to say, when you are running the proxy on-premise (on your own servers) you can modify the file to accommodate any custom feature you might need. Nginx with Lua is an extremely powerful open-source piece of technology.

We have written a blog post explaining how to augment APIs with Nginx and Lua. Some examples of extensions that can be done:

  • Basic DoS protection: white-lists, black-lists, per-second rate limiting.

  • Define arbitrarily complex mapping rules.

  • API rewrite rules, e.g. you might want API requests starting with /v1/* to be rewritten to /versions/1/* when they hit your API backend.

  • Content filtering, you can add checks on the content of the requests, either for security or to filter out undesired side effects.

  • Content rewrites, you can transform the results of your API.

  • Many, many more. Combining the power and flexibility of Nginx with Lua scripting is a winning combination.

Over time we will add recipes on how to achieve such extensions for your on-premise proxy. If you are ready for production you can always ping 3scale's support forums.

Finally, one last note. If you use the configuration files in your on-premise proxy, there is a log() function you can call during processing that will print its arguments to the nginx error.log file.


3scale offers a framework to create interactive documentation for your API just like the interactive documentation of 3scale APIs.

With 3scale’s Active Docs (based on Swagger) you will have functional and eye-candy documentation for your API. The interactive documentation will help your developers to explore, test and integrate with your API.

Every minute spent making your developers happy is a minute well invested on your API :)

At the end of this section you will have set up the Active Docs for your API.

Click on the “API” > “Active Docs” tab in your control panel. This will lead you to the list of your Service Specs (initially empty).

List of Service Specs in Active Docs

You can add as many Service Specs as you desire; typically, each Service Spec corresponds to one of your APIs. For instance, at 3scale we have four different specs, one for each API of 3scale: Service Management, Account Management, Analytics and Billing.

When you add a new Service Spec, you will have to provide: a name; a system_name (required to reference the Service Spec from the portal); whether you want the spec to be public or not; a description that is only meant for your own consumption; and finally the API JSON spec that you can see in the figure below.

NOTE – the API JSON spec is the “secret” ingredient of the whole Active Docs.

You must generate the specification of your API according to the spec proposed by Swagger. In this HowTo we assume that you already have a valid Swagger-compliant specification of your API.

Please check our HowTo on the Active Docs API Specification to learn more about how to generate such specification for your API.

Create a Service Spec for Active Docs

Once you have created the first API on the Active Docs by adding a Service Spec, you can see it listed on the “API > Active Docs” tab.

You can edit it whenever necessary, delete it, or switch it between public and private.

List of Service Specs for Active Docs

You can also preview what the Active Docs will look like by clicking on the name you gave to the Service Spec (in the example we called it Sentiment API). You can do this even if the spec is not public yet.

This is what your Active Doc will look like!

Preview an Active Doc

Once you are happy with your Active Doc, it’s time to make it public and link it on your Developer Portal so that it can be used by your API developers.

For that purpose, you will have to add the following snippet to the content of any page of your Developer Portal.
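The snippet is essentially a small piece of JavaScript that initializes the Active Docs widget. A sketch of its shape is shown below; treat the object name as illustrative and copy the exact snippet from your Admin Portal:

```html
<script type="text/javascript">
  $(function () {
    // [] renders all public Service Specs; pass an array of system_names
    // (e.g. ['sentiment-api']) to render only specific ones.
    ActiveDocs.init([]);
  });
</script>
```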

This must be done via the CMS of your Developer Portal.

(Note – this snippet requires jQuery, which is typically already included in the Main Layout of your Developer Portal. If you remove it from there, make sure you add the jQuery dependency here.)

And that’s it, simple enough isn’t it? :-)

What happens if you have more than one Service Spec? Can I put different Service Specs in different pages?

Yes, of course. The init([]) call controls which Service Specs are loaded and rendered: [] loads them all, while ['sentiment-api','another_api'] would only show the Service Specs whose system names are in the array. With one exception! Service Specs must be public before they are available on your Developer Portal, and when you create a Service Spec it starts as private (hidden state).

Active Docs are a great tool for your developers to test your API; they ease integration and provide a very convenient way to explore and learn how to get the most out of your API.

However an API evolves in multiple ways:

One is with versions, which is not directly supported by Active Docs or Swagger (however, remember that you can have multiple Service Specs, one for each version, and display them on different pages).

The other is when new operations become available, or the same operation starts accepting a wider range of parameters. When versioning is not required because compatibility is not broken, you must still remember to keep the Swagger-compliant spec in sync with your API and to update the JSON spec on your Admin Portal.

You can do that by editing the Service Spec via the web interface on your Admin Portal.

Or automatically, via the Account Management API. Every time the source of your API changes, the JSON file with the Swagger-compliant spec must reflect those changes. Once the spec has been updated on your end, you can update your Service Spec straight from your development environment like this:

The id of the Service Spec can be found in the URL when you are doing a preview.

If you are not familiar with the Swagger specification for RESTful APIs, please check our HowTo on the Active Docs API Specification to learn how you can create a Swagger-compliant specification of your API.


This section will help you to create a Swagger-compliant spec for your RESTful API, which is required to power the Active Docs on your Developer Portal. It also showcases the Active Docs extensions to the Swagger spec that you can take advantage of.

If you would rather just read code, all the examples are in the following GitHub gist.

3scale Active Docs are based on Swagger (from Wordnik), a specification for RESTful web services.

Swagger is not only a specification; it also provides a full-featured framework around it. Namely:

  1. Servers for the specification of the resources in multiple languages (NodeJS, Scala, etc.).
  2. A set of HTML/CSS/JavaScript assets that take the specification file and generate the eye-candy UI.
  3. A code generator so that you can automatically create client-side libraries in multiple languages if your API has a Swagger-compliant spec. Support to create client-side libraries in Scala, Java, JavaScript, Ruby, PHP and ActionScript 3 is already available. Support for Android and other clients is in the making thanks to the fantastic Swagger team.

3scale’s Active Docs is not a Swagger replacement but rather an instantiation of it. With Active Docs you do not have to run your own Swagger server or deal with the UI components of the interactive documentation. The interactive documentation is served and rendered from your 3scale Developer Portal.

The only thing you need to do is build a Swagger-compliant specification of your API and add it on your Admin Portal, and the interactive documentation will be all set. Your developers will be able to launch requests against your API through your Developer Portal.

If you already have a Swagger-compliant spec of your API, you can just add it in your Developer Portal (see the HowTo on the Active Docs Configuration to learn how).

3scale extended the Swagger specification in several ways to accommodate certain features that were needed for our own interactive API documentation:

  1. Auto-fill of API keys
  2. Operations grouping by colors
  3. Support for more complex dataTypes: Hash, Array, Custom

WARNING – Any Swagger-compliant spec can be used in 3scale’s Active Docs. However, the reverse is not necessarily true. If you use the 3scale extensions you will not be able to use the original Swagger UI or the Swagger code generators, since they will not be able to interpret the extensions (at least not yet).

Auto-fill and operations grouping are basically harmless since they only affect the UI; they will simply be ignored by the original Swagger framework. However, the complex dataTypes will cause the Swagger code generators to fail or to generate a bogus client-side library. If you are planning on using the code generator to create client-side libraries for your API, do not rely on the complex data types offered by Active Docs (Hash, Array, Custom).

In the remainder of this document we will state when we are dealing with an extension of the Swagger spec specific to 3scale’s Active Docs.

We recommend that you read first the original spec from the original source:

Swagger Specification.

On the Swagger site there are multiple examples of specs. If you like to learn by example you can follow the example of the Sentiment API.

The Sentiment API is an extremely simple API that does textual sentiment analysis. It is not meant for production but for dissemination and learning.

The Sentiment API is composed of 3 methods:

  • GET /v1/words/{word}.json Returns the sentiment of a given word
  • POST /v1/words/{word}.json?value={value} Sets the sentiment value of a given word
  • GET /v1/sentence/{sentence}.json Returns the sentiment value of a given sentence

Because we want to add all the 3scale Management Layer goodies, we will have to add some extra parameters to each method. We chose the standard App ID authentication method (there are others).

Consequently, we need to add the parameters:

app_id={app_id}&app_key={app_key}

The $app_id$ and $app_key$ will be sent by the developers in their requests to your API. The developers will obtain those keys from your Developer Portal.

Upon receiving the keys, you will have to do the authorization check against 3scale using the Service Management API. (This diverts a bit from Active Docs into integration territory :-) Check the quickstart on integration for more details.)

Summing up, for your developers the documentation of your API would look like this:

However, if you want the documentation to look sexy like this: Sentiment API Documentation with Active Docs.

Then you will have to create the following Swagger-compliant spec:

You can use this spec out-of-the-box to test your Active Docs. But remember that this is not your API :-).

At first it might look a bit cumbersome but the Swagger spec is not complex at all. Let’s dissect it a bit.

The Swagger specification relies on a resource declaration that ultimately maps to a Hash encoded in JSON. Let’s take the above sentiment_api_v1.json as an example and go step by step…

This is the first level of the declaration of a resource. It’s important to note that the $basePath$ is the endpoint of your API, not the Developer or the Admin Portal.

WARNING – The host must be a domain and NOT an IP address. 3scale will proxy the requests made against your Developer Portal to your basePath and render the results. This requires your basePath endpoint to be white-listed by us for security reasons. You can only declare a basePath that you own: 3scale reserves the right to terminate your account if we detect that you are proxying a domain that does not belong to you. Notice that this means that localhost or any other wildcard domain will not work.

Troubleshooting: If your app is hosted on Amazon, and the proxy is not working, please check out an alternative to using the CNAME.

The $API$ corresponds to each unique URL path of your RESTful API. There is no restriction on having multiple $API$ entries for the same URL path.

The $ERROR$ declarations do not affect the interactive UI, so unless you are planning to use Swagger’s code generators there is no need to define them.

To learn more about the declaration of errors please go to the official Swagger Specification page.

The $OPERATION$ roughly corresponds to the basic bar-like container that lets your users make requests to your API via the interactive UI.

One of the fields of an operation is the $responseClass$, the response class maps the result of an operation to an object in the Models. For instance:

"responseClass":"List[user]"

This would mean that the operation returns a list of user objects, which would be defined in the Models. Again, these are only required if you want to use Swagger’s code generators to create client-side libraries. If you only want the UI goodness, you can skip defining Models.

Extension to the Swagger spec: Grouping

On operations, the field group is an Active Docs extension to the Swagger spec. If you declare group on an operation, the UI will maintain the same color scheme for all the other operations with the same group name. You can see an example in the code snippet above.

This extension only affects the UI, it will get ignored outside the domain of Active Docs.
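As a hedged illustration (operation fields abbreviated, names made up), two operations sharing a group would be declared like this:

```json
{
  "operations": [
    { "httpMethod": "GET", "summary": "Get spelling suggestions", "group": "words" },
    { "httpMethod": "GET", "summary": "Get the sentiment of a word", "group": "words" }
  ]
}
```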

The parameters of an operation. In the example below we have the path parameter word in /word/{word}.json.

There is more to the parameters than the fields shown in the example above.

Some additional handy fields:

  • “defaultValue”: “Default value to be sent unless edited by the user”
  • “allowMultiples”: true | false,
    this allows multiple values for the parameter
  • “allowedValues”: {},
    it sets a choice selector with pre-determined values. For instance, {"values": ["foo","bar"], "valueType": "LIST"}

Swagger supports the following $dataTypes$:

  1. “string”
  2. “integer”
  3. “long”
  4. “double”
  5. “boolean”
  6. Dates should be described as string values in ISO-8601

Extension to the Swagger spec: Complex Data Types

We have extended the $dataTypes$ to support more complex data structures:

  1. “custom”: defines a key/value pair where the name of the parameter is ignored and the one provided by the user is taken. With this you can create user-defined parameters like whatever=value
  2. “hash”: defines parameters that are maps, e.g. user[name]=jane&user[last_name]=smith
  3. “array”: defines arrays, e.g. transactions[0][foo]=1&transactions[1][bar]=42

For the extended $dataTypes$ you can define allowMultiple: true if you need to allow the user to add and remove parameters at will. Let us illustrate all this with an example: take the screenshot below of a fragment of our Service Management API.

Service Management API example of the usage of complex dataTypes

The parameter user_id has a dataType of string, quite typical. However, the parameter usage is a bit more complex, because it’s a hash. And on top of that, usage is a hash of parameters of type custom, because the metric names are not known in advance but are user-defined. This can be achieved using allowMultiple and the fact that the array and hash dataTypes can have nested parameters, see the snippet…

The interactive UI will encode the results of the form generated by the spec as a hash (also known as dictionary, associative array, map, etc.) of custom defined parameters like this:

usage[hits]=1&usage[save]=1&usage[yet_another_metric]=42

This is what your RESTful API would expect if it expects a hash (associative array, map, etc.), following the URL encoding of complex data structures.
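The encoding above can be reproduced with a short JavaScript sketch. The metric names are the ones from the example; note that on the wire the brackets are percent-encoded:

```javascript
// Build the form-encoded usage hash the way a browser form submission would.
// Metric names are taken from the example above.
const usage = { hits: 1, save: 1, yet_another_metric: 42 };

const params = new URLSearchParams();
for (const [metric, value] of Object.entries(usage)) {
  params.append(`usage[${metric}]`, value);
}

const encoded = params.toString();
console.log(encoded);
// Brackets are percent-encoded on the wire:
// usage%5Bhits%5D=1&usage%5Bsave%5D=1&usage%5Byet_another_metric%5D=42
```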

If you want to learn more about the usage of the extension to the Swagger spec regarding complex data types, you can check the Service Management API spec source. We recommend viewing it with an online JSON viewer.

JSON Editor Online is quite good: it pretty-prints compact JSON and also provides a browser for the JSON object. We really recommend it if you are not well versed in JSON notation.

Extension to the Swagger spec: Auto-fill of API keys

Another extension to the Swagger spec of 3scale’s Active Docs is the auto-fill of the API keys. On the parameters, you can define the field threescale_name with values app_ids, app_keys or user_keys depending on the authentication mode your API is in.

For instance, for the authentication mode App Id you might want to declare "threescale_name": "app_ids" for the parameter that represents the application id, and "threescale_name": "app_keys" for the parameter that represents the application key. Just like in the following snippet…
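A hedged sketch of such parameter declarations for the App Id mode (other parameter fields omitted, names illustrative):

```json
{
  "parameters": [
    { "name": "app_id", "paramType": "query", "dataType": "string", "threescale_name": "app_ids" },
    { "name": "app_key", "paramType": "query", "dataType": "string", "threescale_name": "app_keys" }
  ]
}
```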

If you do so, Active Docs will automatically prompt users to log in to the Developer Portal to get their keys, as shown in the screenshot below:

Auto-fill when not logged-in

Or, if the user is already logged in, it will show the latest 5 keys that could be relevant for them, so that they can test right away without having to copy and paste keys around.

Auto-fill when logged-in

The field threescale_name is also an extension to the Swagger-spec which will be ignored outside the domain of Active Docs.

At this point you should be able to create a Swagger-compliant spec for your API. Depending on your needs, you might have also decided to use the extensions to the spec provided by Active Docs: grouping, auto-fill of API keys and Complex Data Types.

How you create the JSON file with the spec is entirely up to you. Some people prefer to write the JSON straight in their text editor; however, this might prove difficult to maintain in the long term.

Every time you make a small change to your API (not a change that requires API versioning, but a small change like updating a description or adding a new optional parameter) you will have to update the JSON spec accordingly. It’s very easy to forget to map the changes into the spec, and eventually your docs will be out of sync.

To avoid this problem, it is advisable to embed your specification in your own source code. The Swagger team achieves this by annotating JAX-RS-aware classes with a series of annotations (you can see an example in the appendix Sample Resource and Operations Annotations on the Swagger site).

However, JAX-RS annotations are not available in all programming languages; they exist in languages like Java and Scala. For other languages you will have to rely on community-maintained ports, if any.

We really recommend that you build your API spec via annotations rather than writing the spec from scratch. It might sound like overhead at first, but it will pay off in the long run.

At 3scale, for instance, we use source2swagger, which is a very simple way to write JSON objects embedded in the comments of the source code.

The source2swagger binary can parse the source code and generate the Swagger-compliant JSON spec file that can be uploaded to Active Docs via your Admin Portal or via API (see the section of Active Docs in the 3scale support portal).

For instance, the spec of the Sentiment API we just saw before was generated using source2swagger. Here you can see a part of the Sentiment API source code that contains the annotations:

As you can see, it’s quite simple to annotate the source code via comments.

Source2swagger is by no means the only way to write the Swagger-compliant spec. You can use JAX-RS, write it straight from scratch, or whatever other tool is available.


If you have an OAuth-enabled API you will want to show off its capabilities to your users. But how can you do this using your Active Docs? Well, even though this is a bit trickier than usual, it is entirely possible, and we're going to show you how.
After completing these steps you will have a set of Active Docs that will allow your users to easily test and call your OAuth-enabled API from one place. Before you start, you will need to have configured the required OAuth endpoints for your chosen workflow. If you are using Nginx as your API proxy, you will need to have followed the steps in the Set up OAuth with Nginx API Gateway proxy How To. Additionally, you will need to be familiar with how to set up Active Docs: Configure Active Docs and Active Docs Spec.

My first example is for an API using the OAuth2 Client Credentials flow. This API simply returns the sentiment value (from 5 to -5) for a given word. The API is only accessible using a valid access token, so users are only able to call it once they have exchanged their credentials (client\_id and client\_secret) for an access token.

In order for users to be able to call the API from Active Docs, they will need to request an access token. Since this is just a call to an OAuth authorization server, I can create an Active Docs spec for the OAuth token endpoint. This will allow me to call this endpoint from within Active Docs. In my case, for a client credentials flow, my Swagger JSON spec looks like the below.

For a resource owner OAuth flow, you will probably also want to add parameters for a username and password, as well as any other additional parameters that you require in order to issue an access token. For my client credentials flow example, I am just sending the client\_id and client\_secret (which can be populated from the 3scale values for signed-in users) as well as the grant type.

Then, in the Active Docs spec for my Sentiment API, I need to add the access\_token parameter instead of the client\_id and the client\_secret. I can then include my Active Docs in my Developer Portal as per usual.
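The actual spec is not reproduced here; as a hedged sketch only, a client-credentials token-endpoint declaration could look something like the following in Swagger 1.1 notation. The basePath, path, and threescale_name mappings are all assumptions:

```json
{
  "basePath": "https://sentiment-api.example.com",
  "apis": [
    {
      "path": "/v1/oauth/token",
      "operations": [
        {
          "httpMethod": "POST",
          "summary": "Exchange client credentials for an access token",
          "parameters": [
            { "name": "client_id", "paramType": "query", "dataType": "string", "required": true, "threescale_name": "app_ids" },
            { "name": "client_secret", "paramType": "query", "dataType": "string", "required": true, "threescale_name": "app_keys" },
            { "name": "grant_type", "paramType": "query", "dataType": "string", "defaultValue": "client_credentials" }
          ]
        }
      ]
    }
  ]
}
```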
In this case, since I want the OAuth endpoint to display first, this looks like this:

Active Docs uses XMLHttpRequests to generate the documentation from the JSON spec, as well as to make the calls to the API endpoints. We can "hijack" these calls to listen for the point at which the different requests have completed and extract data from them. I am going to use this mechanism to extract the access token from the call and populate it in the rest of the documented methods. You can see an example of this in action at: https://sentiment-api.3scale.net/docs

Say we have another application which holds address data for a user. We want to expose an API for this application which allows retrieval of the addresses stored by a particular user. In this case we need our users to authorize any third-party applications using our API to access their data. We will use the Server-Side Web Applications OAuth flow for this purpose.

Both the Client-Side (Implicit) and Server-Side Web Applications flows have some extra steps to get the access token, which makes the workflow a bit more complicated. We're going to use Google’s OAuth playground to help us out with these extra steps. First of all, we need to embed the Google OAuth playground in an iframe from within our Active Docs. This will look something like this:
We can do this by inserting the following iframe in the CMS page for our Active Docs. These links set up all of the correct parameters so that our settings in the Google OAuth playground are correctly populated to point to our OAuth-enabled application, as well as populating the credentials for signed-in users. You will need to edit the following parameters in the Google OAuth playground URL to match your endpoints and desired scope(s):

  • response_type=[token|code] (e.g. token for the Client-Side OAuth flow, code for the Server-Side OAuth flow)
  • scopes={the scope for using Active Docs calls} (e.g. this can be the Application Plan, in which case it could be pre-populated using liquids)
  • oauthAuthEndpointValue={your authorization endpoint} (e.g. https://address-book-app-webhooks.herokuapp.com/oauth/new)
  • oauthTokenEndpointValue={your token endpoint} (e.g. https://address-book-app-webhooks.herokuapp.com/oauth/token.json)

In order to allow your authorization page to display within an iframe, you will need to set cross-frame options for it. In my case I did this by adding the following setting to the application.rb file for my Rails application. I have made this possible only from the Google OAuth playground to prevent any malicious use from elsewhere. You might also need to make the equivalent change for your own application.
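The exact setting is not shown above; as a hedged sketch only (the header value, the allowed origin, and the Rails-version behaviour are all assumptions), it could look something like this in config/application.rb:

```ruby
# Hedged sketch: allow the authorization page to be framed only by the
# Google OAuth playground (Rails 4+ style; adapt to your framework/version).
config.action_dispatch.default_headers = {
  'X-Frame-Options' => 'ALLOW-FROM https://developers.google.com'
}
```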

You can see an example of this in action at: https://address-book-app.3scale.net/docs

And that's it. You should now be able to allow your users to call your OAuth-enabled API from your Active Docs.
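As a closing note, the XMLHttpRequest "hijacking" described for the Sentiment API example can be sketched roughly as below. The token endpoint path, the response shape, and the input selector are all assumptions, not the actual implementation:

```javascript
// Hedged sketch: pull the access token out of the token endpoint's JSON
// response. The response shape ({"access_token": "..."}) is an assumption
// based on a typical OAuth2 token response.
function extractAccessToken(responseText) {
  try {
    const token = JSON.parse(responseText).access_token;
    return token || null;
  } catch (e) {
    return null; // not JSON, or no token present
  }
}

// In the browser you could wire this to jQuery's global AJAX hook
// (assumptions: the page ships jQuery, the token endpoint path contains
// "/oauth/token", and the parameter inputs are named access_token):
//
// $(document).ajaxComplete(function (event, xhr, settings) {
//   if (settings.url.indexOf("/oauth/token") === -1) return;
//   const token = extractAccessToken(xhr.responseText);
//   if (token) $("input[name='access_token']").val(token);
// });
```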

In order to support a formal development lifecycle for your developer community, you may want to provide separate sandbox and production environments to access your API. Thus during development and testing the API may be used without the adverse consequences of operating in a production environment. Note this is different from managing the dev, test, stage, deploy cycle of the API itself.

By the end of this HowTo you will be able to set up 3scale to differentiate between your production and sandbox environments.

There are several options to provide differentiated sandbox and production environments for your developer community - 3scale supports a lot of flexibility on how to implement this. Once you decide on which approach is right for you, you could implement this within the Nginx proxy as a custom extension within the config files.

This tutorial describes two ways of providing differentiated environments:

This option is simple to set up and simple during operational use. The main limitations are that both environments share the same backend, and that production and sandbox traffic are mixed together in the reporting views.

In this option you would create one Application plan for each environment, and set the rate limits and availability of methods/metrics depending on the environment (setting rate limits). For example, in the sandbox plan you might set lower rate limits and optionally restrict access to any methods that are not desirable for the sandbox, such as expensive resources or write/delete methods.

Environment-specific rate limits

On your integration side you would have to implement the mapping to the corresponding 3scale methods. Remember that this is only simulating environments and not hitting two different endpoints.

For example assuming we have a developer app under the sandbox plan which is restricted from "write" calls, this is the auth call to 3scale representing the mapping for POST to /words.....:
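The actual call is elided above; as a hedged sketch, an authrep call of this kind could look like the following. The metric name words_write and the credentials are illustrative, so check them against your own mapping:

```
GET /transactions/authrep.xml?provider_key=YOUR_PROVIDER_KEY&app_id=APP_ID&app_key=APP_KEY&usage[words_write]=1 HTTP/1.1
Host: su1.3scale.net
```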

and the response will be 409 with the body:
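The exact body is elided above; as a hedged sketch, a 3scale denial body typically looks something like this (the reason text depends on why the call was denied):

```xml
<status>
  <authorized>false</authorized>
  <reason>usage limits are exceeded</reason>
</status>
```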

Whenever desired, the plan can be upgraded from sandbox to production without coding changes by the developer, via: 1) a self-service plan change in the developer portal; 2) a request to the API provider to make the plan change; or 3) a plan change determined unilaterally by the API provider. Which of these applies will depend on your Service settings.

This option allows differentiation of the API backend for each environment. The operational use is just as simple as option 1. The main difference is that the implementation is slightly more complicated (requiring custom modifications to the Nginx config files). Due to the need for Nginx to parse the response bodies, there will also be a performance hit.

In this scenario the backend provides different response logic for the two modes, but the developer should not have to make any coding changes to switch between the two environments. This is achieved in 3scale by using the Nginx proxy to route calls based on the authorization response from 3scale, which indicates whether production calls are enabled or disabled in the respective application plan. For example, when an app under the sandbox plan makes a call, the proxy makes the auth request to 3scale without knowing whether this is a call to sandbox or production. The call might look like this:

and the response is parsed for 'plan' to determine whether to route the call to the sandbox or the production backend.
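As a hedged sketch, the relevant fragment of a 3scale authorization response carrying the plan name could look like this (other elements omitted):

```xml
<status>
  <authorized>true</authorized>
  <plan>Sandbox</plan>
</status>
```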

The next steps assume you have two environments called exactly "Sandbox" and "Production". If you want to use different names, just define them appropriately in the API section of your admin console and verify the returned value from the authorization call.

On the Nginx side you will have to apply some modifications to the configuration files generated by 3scale. First, define a new upstream. Next, assign the server name for your services (not obligatory if you have only one server), where YOUR-SERVICE-DOMAIN-FOR-NGINX is the domain (or domains) assigned to the server where Nginx is hosted. With that done, specify the '.lua' file path on your server in the 'location /' part of the config file:
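The snippets themselves are elided above; as a hedged sketch, the pieces could look like this (all names are illustrative and must be adapted to the config generated by 3scale):

```nginx
# Hypothetical second upstream for the sandbox backend
upstream sandbox_backend {
  server sandbox-api.example.com:443;
}

server {
  listen 80;
  server_name YOUR-SERVICE-DOMAIN-FOR-NGINX;

  location / {
    # path to the '.lua' file holding the routing logic
    access_by_lua_file /opt/nginx/conf/nginx_sentiment.lua;
  }
}
```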

With the '.conf' file customization finished, we can move on to the '.lua' file, where the logic responsible for the conditional proxy pass resides. Find the line starting with function authrep(params, service) and inside that function definition apply the following changes:

  • Comment out the lines shown here (note that the second line is NOT commented out):
  • Just after these commented lines, add the following code:

This code uses a regular expression to match the response against the plan's name. If you want to experiment more with regular expressions, try Rubular.
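The injected code itself is elided above; assuming the authorization response carries a <plan> element, the conditional routing could be sketched like this (the variable res, the upstream names, and the variable used for routing are all assumptions):

```lua
-- Hedged sketch: 'res' is assumed to be the response from the 3scale auth
-- sub-request made earlier in authrep(). Pick the backend based on the
-- plan name found in the response body.
local plan = string.match(res.body, "<plan>([^<]+)</plan>")
if plan == "Sandbox" then
  ngx.var.proxy_pass_target = "http://sandbox_backend"
elseif plan == "Production" then
  ngx.var.proxy_pass_target = "http://production_backend"
end
```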

Now the calls registering usage in certain environments will automatically hit these environments without any additional changes on the developer's side. As soon as the Application Plan is switched from Sandbox to Production or vice versa, the API calls will automatically be re-routed to the correct backend.

If any of your environments reside on a hosted server instance (e.g. Heroku, Google App Engine, etc.) you will have to do a hostname rewrite (send a proper host name in the headers). To do that, add the following lines:

  • in the '.conf' file, under the 'location /' part, add set $host null;
  • in the '.lua' file, in the code you injected, add:
    ngx.var.host = "SANDBOX-HOSTNAME" in the if plan == "Sandbox" condition (SANDBOX-HOSTNAME can be e.g. 'application.herokuapp.com'), and
    ngx.var.host = "PRODUCTION-HOSTNAME" in the if plan == "Production" condition.
