API Configuration

Application plans define the different sets of access rights you choose to allow for consumers of your API. These can determine rate limits, which methods or resources are accessible, and which features are enabled.

By default, when your 3scale account is created, you are given two plans: Basic and Unlimited. You can keep and edit these or create your own, and you can create as many plans as you need.

To create a new application plan, follow these steps:

  1. Go to the API tab
  2. Look for the Application plans section
  3. Click on Create Application Plan

On the next screen, choose a name and a system name (system names must be unique) for your new plan. If the Applications require approval? checkbox is selected, no applications will be able to access your API without approval.

Once you’ve created a plan, you can provision rate limits and set up paid plans.

After you’ve created all your plans, you can select a default plan that will be used when developers sign up to register their applications. To do so, go to API > Application plans and select the default plan.

If you don’t indicate a default application plan, when a new user signs up to get access to your API, there won’t be an application created by default (meaning they won’t really get access to your API).


There are several different ways to add 3scale management to your API – including using an Nginx proxy, Amazon API Gateway with Lambda, the 3scale Service Management API, or code plugins. This HowTo drills down into how to use the code plugin method to get you set up.

By the time you complete this HowTo, you will have configured your API to use one of the available 3scale code plugins to manage access traffic.

3scale API plugins are available for a variety of implementation languages including Java, Ruby, PHP, .NET and others – the full listing can be found in the code libraries section. The plugins provide a wrapper for the 3scale Service Management API to enable:

  • API Access Control and security
  • API Traffic Reporting
  • API Monetization

This wrapper connects back into the 3scale system to set and manage policies, keys, rate limits, and other controls that you can put in place via the interface – see the Hello World API QuickStart guide for how to configure these elements.

Plugins are deployed within your API code to insert a traffic filter on all calls as shown in the figure.

Once you have your 3scale account created (signup), navigate to the code libraries section on this site and choose a plugin in the language you plan to work with. Click through to the code repository to get the bundle in the form that you need.

If your language is not supported or listed, let us know and we’ll tell you about any ongoing support efforts for it. Alternatively, you can connect directly to the 3scale Service Management API.

As described in the Hello World API QuickStart guide, you can configure multiple metrics and methods for your API on the API control panel. Each metric and method has a system name which will be required when configuring your plugin. You can find the metrics in the application plans area of API settings in your admin portal.

For more advanced information on metrics, methods and rate-limits see the specific HowTo Guide on rate limits.

Armed with this information, return to the code and add the downloaded code bundle to your application. This step varies for each type of plugin, and the form it takes depends on the way each language framework uses libraries. For the purposes of this example, we’ll proceed with the PHP instructions. Integration details for the other plugins are included in the README documentation of each repository.

For PHP: Require the ThreeScaleClient.php file (assuming you placed the library somewhere within the
include path):
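A minimal sketch, assuming the library was unpacked into a lib/ directory on the include path (adjust the path to your setup):

```php
// Path is an assumption – point it to wherever you placed the library
require_once('lib/ThreeScaleClient.php');
```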

Then, create an instance of the client, giving it your provider API key:
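For example (the provider key is a placeholder; your real key is in the Account section of your admin portal):

```php
require_once('lib/ThreeScaleClient.php');

// "YOUR_PROVIDER_KEY" is a placeholder for your real 3scale provider key
$client = new ThreeScaleClient("YOUR_PROVIDER_KEY");
```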

Because the object is stateless, you can create just one and store it globally.

To authorize a particular application, call the `authorize` method passing it the application id and optionally the application key:

Then call the `isSuccess()` method on the returned object to see if the authorization was
successful:
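A sketch of the authorization flow, using `$client` from the previous step and placeholder credentials (the application key argument is optional):

```php
// Placeholder credentials – use a real app_id/app_key pair from your account
$response = $client->authorize("YOUR_APP_ID", "YOUR_APP_KEY");

if ($response->isSuccess()) {
    // Authorized – serve the API call
} else {
    // Not authorized – reject the call
}
```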

If both provider and app id are valid, the response object contains additional information
about the status of the application:

If the plan has defined usage limits, the response contains details about the usage broken down by the metrics and usage limit periods.
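The accessor names below follow the PHP client's bundled usage examples; treat them as an illustration and check the README of your plugin version, as they may differ:

```php
// Inspect the usage reports returned with a successful authorization
$usageReports = $response->getUsageReports();

$report  = $usageReports[0];
$period  = $report->getPeriod();       // usage limit period, e.g. "day"
$current = $report->getCurrentValue(); // usage accumulated in this period
$max     = $report->getMaxValue();     // the limit defined in the plan
```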

If the authorization failed, `getErrorCode()` returns the system error code and `getErrorMessage()` a human-readable error description:
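For example (the error code shown in the comment is illustrative):

```php
if (!$response->isSuccess()) {
    // e.g. "application_not_found" plus a human-readable description
    $code    = $response->getErrorCode();
    $message = $response->getErrorMessage();
}
```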

To report usage, use the `report` method. You can report multiple transactions at the same time:
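A sketch reporting two transactions in one call, using placeholder application ids and the built-in hits metric:

```php
// The app ids are placeholders – use real ones from your 3scale account
$client->report(array(
    array('app_id' => 'APP_ID_1', 'usage' => array('hits' => 1)),
    array('app_id' => 'APP_ID_2', 'usage' => array('hits' => 5))));
```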

The `app_id` and `usage` parameters are required. Additionally, you can specify a timestamp of the transaction:
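For example, reusing the placeholder application id from above ('timestamp' being the parameter name used by the PHP client):

```php
$client->report(array(
    array('app_id'    => 'APP_ID_1',
          'usage'     => array('hits' => 1),
          'timestamp' => '2010-04-28 12:38:33 +0200')));
```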

The timestamp can be either a Unix timestamp (as an integer) or a string. The string has to be in a format parseable by the [strtotime](http://php.net/manual/en/function.strtotime.php) function.
For example:

"2010-04-28 12:38:33 +0200"

If the timestamp is not in UTC, you have to specify a time offset. That’s the “+0200” (two hours ahead of Coordinated Universal Time) in the example above.
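Because the client relies on PHP's own strtotime, you can verify locally that a timestamp string parses before using it in a report:

```php
// strtotime() returns a Unix timestamp (integer) on success, false on failure
$ts = strtotime("2010-04-28 12:38:33 +0200");
var_dump(is_int($ts)); // true for a parseable string with an explicit offset
```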

Then call the `isSuccess()` method on the returned response object to see if the report was
successful:

In case of error, `getErrorCode()` returns the system error code and `getErrorMessage()` a human-readable error description:
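A sketch covering both the success check and the error accessors, reusing the placeholder application id from above:

```php
$reportResponse = $client->report(array(
    array('app_id' => 'APP_ID_1', 'usage' => array('hits' => 1))));

if (!$reportResponse->isSuccess()) {
    // Log the system error code and the human-readable message
    error_log($reportResponse->getErrorCode() . ': '
            . $reportResponse->getErrorMessage());
}
```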

(Note that instead of reporting traffic separately from each authorization, you can do both in a single call to the AuthRep method.)

Once the required calls are added to your code, you can deploy (ideally to a staging/testing environment) and make API calls to your endpoints. As the traffic reaches the API, it will be filtered by the plugin and the keys checked against issued API credentials (refer to the Hello World API QuickStart for how to generate valid keys – a set will also have been created as sample data in your 3scale account).

To see if traffic is flowing, log into your API provider dashboard and navigate to the Analytics tab – here you will see traffic reported via the plugin.


Once the application is making calls to the API, they will become visible on the Statistics dashboard, and on the Statistics -> Usage page for a more detailed view.

If you’re receiving errors in the connection you can also check the “Errors” menu item under Monitoring.

This HowTo describes the simple form of plugin use with synchronous calls to the 3scale API, but an asynchronous variation is also possible, as well as proxies with key caching via an Nginx proxy. You can enable an asynchronous integration via the plugin, but this involves adding a caching mechanism for the authentication response from 3scale. The implementation depends on your requirements, but for reference you can check how it is done in the Nginx configuration files that you can download from your admin portal (API > Integration, set to Nginx on-premise gateway, and download the configuration files).


This tutorial describes the necessary steps to set up an integration with the 3scale API Management Platform using a gateway. The gateway mode allows you to complete the integration with 3scale without having to modify the source code of your API or having to redeploy your API.

There are two main choices for deploying the gateway: hosted by 3scale or on premise.

  • Hosted by 3scale: In this case, 3scale hosts the gateway for you in the cloud. There are two hosted environments: staging and production. The first is meant to be used only while configuring and testing your API integration. When you have confirmed that your setup is working as expected, you can choose to deploy it to the production environment. Or at this point, you can download the configuration files and use them on premise.
  • On premise: Once you have configured your gateway in staging mode, you will be able to download the configuration files to run your own gateway on premise. The local gateway will behave exactly the same way as the hosted gateway and no further configuration will be required to launch your API. On-premise mode is the recommended mode of operation for production environments.

Get API traffic in 1 minute

Below is a screenshot of the gateway configuration page. You can access it from the API > Integration section of your 3scale Admin Portal.

Setting up your API in the staging proxy

Step 1: Declare your private base URL

The private base URL is the endpoint host of your API backend. For instance, if you were Twitter, the private base URL would be https://api.twitter.com/. If you are the owner of the Sentiment API, it would be http://api-sentiment.3scale.net/.

The gateway will redirect all traffic to your API backend after all authentication, authorization, rate limits, and statistics have been processed.

Step 2: Update the hosted gateway

To update the hosted gateway, save the settings (API backend, advanced settings, etc.) by clicking on the Update & Test Staging Configuration button in the lower right part of the page. This process will deploy your gateway configuration (at this stage the default configuration) to 3scale's hosted gateway.

Step 3: Get a set of sample credentials

Go to the Applications tab and copy the credentials (the keys) of any of your API users. If you do not have users yet, you can create an application yourself (from the details page of any individual developer account) and the credentials will be generated automatically.

Typically the credentials will be a user_key or the pair app_id/app_key, depending on which authentication mode you are in. Note that the 3scale hosted gateway does not currently support OAuth. However, you can configure it with the on-premise configuration files, or use a plugin integration approach.

Step 4: Get a working request to your API

We are almost ready to roll. Go to your browser (or use curl on the command line) and make a request to your own API to check that everything is working on your end. It could be something like this:

http://api-sentiment.3scale.net/v1/word/good.json

Note that you’re not using the 3scale gateway yet. You’re just getting a working example that will be used in the next step.

Step 5: Closing the circle

Now do the same request, but replace your private base URL (in the example, http://api-sentiment.3scale.net:80) with your hosted endpoint (if you were Twitter, you would change https://api.twitter.com to https://api-xxxxxxxxxxxxx.staging.apicast.io). You also have to add the parameters to pass the credentials that you just copied.

Continuing the example, it would be something like:

https://api-2445581436380.staging.apicast.io:443/v1/word/good.json?user_key=YOUR_USER_KEY

If you execute this request, you’ll get the same result as in step 4. However, this time the request will go through the 3scale hosted gateway.
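The same two requests from the command line with curl (hostnames and user key are the placeholders from the example above):

```shell
# Step 4 – request your API backend directly (no 3scale involved)
curl "http://api-sentiment.3scale.net/v1/word/good.json"

# Step 5 – the same request through the 3scale hosted gateway, with credentials
curl "https://api-2445581436380.staging.apicast.io:443/v1/word/good.json?user_key=YOUR_USER_KEY"
```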

That's it! Your API is now integrated with 3scale.

3scale's hosted gateway validates the credentials and applies any gateway rules that you’ve defined to handle rate limits, quotas, and analytics. If you haven’t touched the mapping rules, every request to the gateway will increase the metrics hits by 1 by default. You can see how much your hits metric has increased in your Admin Portal.

If you want to experiment further, test to see what happens if you try to use credentials that don’t exist. The gateway will respond with a generic error message (you can also define a custom one).

You can also set a rate limit of 1 request per minute. When you try a second request within the same minute, you’ll see that it doesn’t reach your API backend. The gateway stops it because it violates the limit that you just set up.

Gateway basics

Now that you have your gateway up and running, it’s helpful to learn more about your basic configuration options. For more advanced use cases, you should check the Advanced Nginx section.

Endpoints

  • What is a private base URL? The private base URL is the endpoint of your API. It’s where the gateway will forward the requests that it receives.
  • The API backend can also be HTTPS. In this case, you just use the appropriate protocol and port -- for example: https://api-sentiment.3scale.net:443
  • What is the public base URL? The public base URL is where your developers will send requests when using your API. This applies to the hosted mode only. In on-premise mode, you can set a custom public endpoint.
  • The public base URL is set by 3scale and cannot be changed.

Host header

This option is only needed for API backends that reject traffic unless the Host header matches the expected one. In these cases, having a gateway in front of your API backend will cause problems since the Host will be that of the gateway -- for example: xxx-yyy.staging.apicast.io

To avoid this issue, you can define the host your API backend should expect in the Host Header field, and the hosted gateway will rewrite the host.

Deployment history

Every time you click on the Update & Test Staging Configuration button, the current configuration will be deployed to the 3scale hosted gateway. From that point on, API requests will be handled by the new configuration.

It is not possible to automatically roll back to previous configurations. Instead, we provide a history of all your deployments with the associated configuration files. These files can be used to check what configuration you had deployed at any moment in time. You can recreate any deployment manually.

Mapping rules

By default, you start with a very simple mapping rule.

Mapping Rules

This rule says that any GET request whose path starts with "/" will increment the hits metric by 1. Most likely you will remove this rule, since it is too generic.

The mapping rules define which metrics (and methods) you want to report depending on the requests to your API. For example, here you can see the rules for the Sentiment API:

Mapping Rules

The rules are matched by prefix and can be arbitrarily complex. The notation follows Swagger and ActiveDocs specifications.

  • You can do a match on the path over a literal string:

    /v1/word/hello.json

  • Mapping rules can contain named wildcards:

    /v1/word/{word}.json

    This rule will match anything in the placeholder {word}, making requests like /v1/word/awesome.json match the rule.

    Wildcards can appear between slashes or between slash and dot.

  • Mapping rules can also include parameters on the query string or in the body:

    /v1/word/{word}.json?value={value}

    Both POST and GET requests follow the same notation. The gateway will try to fetch the parameters from the query string for a GET request, and from the body for POST, DELETE, and PUT requests.

    Parameters can also have named wildcards.

Note that all mapping rules are evaluated. There is no precedence (order does not matter). If we added a rule /v1 to the example in the figure above, it would always be matched for requests whose path starts with /v1, regardless of whether it is /v1/word or /v1/sentence. Keep in mind that if two different rules increment the same metric by one and both rules are matched, the metric will be incremented by two.

Mapping rules workflow

The intended workflow to define mapping rules is as follows:

  • You can add new rules by clicking the Add Mapping Rule button. Then you select an HTTP method, a pattern, a metric (or method) and finally its increment. When you are done, click Update & Test Staging Configuration to apply the changes.

  • Mapping rules will be grayed out on the next reload to prevent accidental modifications.

  • To edit an existing mapping rule you must enable it first by clicking the pencil icon on the right.

  • To delete a rule click on the trash icon.

  • Modifications and deletions will be saved when you hit the Update & Test Staging Configuration button.

Running your gateway on premise (production)

Once you’ve set up your gateway and achieved the desired behavior, you can run it locally on your own servers (on premise). Please follow the steps described in the Nginx on-premise setup guide to get your on-premise Nginx API gateway up and running. You can also check the Advanced Nginx guide for some more advanced features of the 3scale API gateway.


The APICast Cloud Gateway is the best deployment option if you want the fastest possible launch for your API, or if you want to make the minimum infrastructure changes on your side.

Once you complete this HowTo, you will have your API fully protected by a secure API gateway in the cloud.

  • You have reviewed the deployment alternatives, and have decided to use the APICast Cloud Gateway (hosted proxy) to integrate your API with 3scale.
  • Your API backend service is accessible over the public Internet (a secure communication will be established to prevent users from bypassing the access control gateway).
  • You do not expect demand for your API to exceed the limit of 50,000 hits/day (beyond this it is recommended to upgrade to the on premise gateway).

The first step is always to set up your API in the Staging environment, where you will define the private base URL and its endpoints, choose the placement of the credentials and other configuration details that you can read about here. Once you are done entering your configuration, go ahead and click on Update & Test Staging Configuration button to run a test call that will go through the Staging proxy to your API.

If everything was correctly configured, you should see a green confirmation message. Also, the button that will allow you to deploy the production gateway will be enabled now.

Before moving on to the next step you should make sure that you have configured a Secret Token to be validated by your backend service. You can define the value for the Secret Token under Authentication Settings. This will ensure that nobody can bypass the access control of the API Gateway.

Both the Staging and Production proxies have base URLs in the apicast.io domain. You can tell them apart easily because in the Staging environment the URLs have a staging subdomain. For example:
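For example (the account-specific part of the hostname is a placeholder):

```
Staging:    https://api-xxxxxxxxxxxxx.staging.apicast.io
Production: https://api-xxxxxxxxxxxxx.apicast.io
```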

Bear in mind

  • A maximum quota of 50,000 hits/day is allowed for your API through the production Cloud Gateway. You can check your API usage in the Analytics section of your admin portal.
  • A throttle limit of 20 hits/second is a hard limit on any spikes in API traffic.
  • Above the throttle limit the Gateway returns a response code of 403. Note that this is the same as the default for an application over rate limits. If you want to differentiate the errors, please check the response body.


This is a step-by-step guide to deploy Nginx on your own server and have it ready to be used as a 3scale API gateway.

The 3scale API gateway requires some external modules for Nginx. Even though it is possible to compile Nginx with these modules from source, we recommend using Openresty, an excellent bundle that already includes all the necessary requirements.

This guide covers the setup steps for Ubuntu/Debian. The dependencies for other Linux versions are well documented in the Openresty installation guide.

Start by installing the necessary system dependencies and libraries:

sudo apt-get install libreadline-dev libncurses5-dev libpcre3-dev libssl-dev perl make

Check which is the latest stable version of Openresty here.

Download Openresty (replace the version number with the one corresponding to the latest stable version):

wget http://openresty.org/download/ngx_openresty-1.9.7.2.tar.gz

Extract the downloaded .tar.gz file as:

tar xzvf ngx_openresty-1.9.7.2.tar.gz

Go to the extracted directory and run the configuration step:

cd ngx_openresty-1.9.7.2
./configure --with-luajit --with-http_iconv_module -j2

This will set the environment so that Nginx will be installed in the following path: /usr/local/openresty/

We recommend using the default path, but in case you need to change it you can use the --prefix=PATH option when invoking the configure step. Keep in mind that in that case some of the instructions in this document might be slightly different for you.

Build and install:

make
sudo make install

To transform Nginx into a 3scale API gateway ready to use with your API, you just need to download your configuration files from your 3scale admin portal.

If you haven't yet configured your API endpoints in 3scale, do it now. Take a look at the Nginx Gateway guide to learn how to do it.

When you are done setting up your API in 3scale, head over to your admin portal. Go to the API > Integration section and follow the next steps:

  1. Go to the Production section

  2. Add a Public Base URL

    This will be the public address of your API gateway in the production environment. It will be used to customize the server_name directive in the Nginx config file, which would otherwise be set to the variable $hostname. When you are using multiple services:

    • If each service has its own domain, you must specify a Public Base URL for each service.
    • If the Public Base URL is defined, only the requests to that domain will be processed by Nginx.

  3. Download the config files

The downloaded .zip file will contain the following files:

  • The nginx_XXX.conf is a typical Nginx config file. You can customize it to meet your production environment requirements: set the number of worker processes, configure the logging, etc. (check the Nginx documentation to learn more).

    You also need to check the following things:

    • Check that the correct port is specified in the listen directive in the server block and change if needed.
    • If you want to use HTTPS on your gateway, you need to make necessary changes in your Nginx configuration file. Please refer to Nginx documentation for more information.

  • The nginx_XXX.lua file contains the logic that you defined in your admin dashboard: authentication methods, mapping rules, etc. You can modify the file to add new features or to handle custom requirements that are not supported by 3scale's admin web interface.
  • (optional) If you have chosen the OAuth authentication method, you will also see other Lua files: authorize.lua, get_token.lua, etc. These files contain the necessary code to handle OAuth authentication.

If you are using the default Openresty path, you must copy all the files (.conf and .lua) to /usr/local/openresty/nginx/conf/ directory. Otherwise, if you specified a custom PATH (with the --prefix parameter on the configure step), you will need to put the files to the directory PATH/nginx/conf (for example, if you used --prefix=/opt/openresty, the directory for the .conf and .lua files would be /opt/openresty/nginx/conf).

The only thing left is to start your Nginx-based API gateway. It can be done with the following command (replace nginx_XXXXXXX.conf with the name of the .conf file you downloaded):

sudo /usr/local/openresty/nginx/sbin/nginx -c /usr/local/openresty/nginx/conf/nginx_XXXXXXX.conf

You can also run a syntax error test on the configuration file by running the previous command with the -t parameter.
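For example, using the same placeholder file name:

```shell
# -t checks the configuration for syntax errors without starting Nginx
sudo /usr/local/openresty/nginx/sbin/nginx -c /usr/local/openresty/nginx/conf/nginx_XXXXXXX.conf -t
```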

You can control the running Nginx by executing the same command with the parameter -s:

sudo /usr/local/openresty/nginx/sbin/nginx -c /usr/local/openresty/nginx/conf/nginx_XXXXXXX.conf -s SIGNAL

SIGNAL may be one of the following:
  • stop – fast shutdown
  • quit – graceful shutdown (waits for the worker processes to finish serving current requests)
  • reload – reloading the configuration file

The Nginx logs by default will be in the /usr/local/openresty/nginx directory. If you have any problems with the setup, you might want to check error.log for details.

Instead of having to type the full path to the executable and configuration files every time, Nginx can be configured to be operated through the Linux service command.

To do so, you should create an init.d script for Nginx. This is a script that describes the environment in which Nginx will be run: location of the binary, the configuration and logs, and several other variables.

You can get an init.d script for Openresty from here: https://gist.github.com/vdel26/8805927.

Copy that script to the /etc/init.d/ directory and make it executable by running:

sudo chmod +x /etc/init.d/nginx

Edit the file if necessary to make sure that the CONF variable points to the right configuration file. By default it expects the file to be named nginx.conf, so change this variable if your configuration file has a different name.

In case you have installed Nginx to a different location than /usr/local/openresty (the default), you will also need to edit the PREFIX variable so that it points to the right location.
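The two variables to check are sketched below (CONF and PREFIX are the names this document uses; the values are placeholders):

```shell
# In /etc/init.d/nginx – adjust to your actual file name and install path
CONF=nginx_XXXXXXX.conf        # name of your downloaded configuration file
PREFIX=/usr/local/openresty    # change only if you used --prefix on configure
```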

Once you are done editing the file, run the following command to set it up:

sudo update-rc.d nginx defaults

Now you will be able to invoke the usual operations in Nginx but in a more convenient way.

  • starting Nginx: sudo service nginx start
  • stopping Nginx: sudo service nginx stop
  • restart Nginx: sudo service nginx restart
  • run a syntax test on the configuration file: sudo service nginx test

There will be times when you are switching between different configuration files. This is a common situation when you are troubleshooting an issue or simply adding new directives to your configuration.

We recommend using symbolic links to make this scenario less cumbersome. You can keep different versions of your configurations in separate directories and then link them to the place where Nginx expects the configuration file to be.

Then when changing the version of the configuration currently running you will only need to change the link and restart Nginx.

For example, to create a symbolic link:

sudo ln -sf /home/ubuntu/My-Nginx-Configs/v2/nginx-v2.conf /usr/local/openresty/nginx/conf/nginx.conf

Note that the -f option will overwrite the target file if it already exists.

Keep in mind that you will also have to link the Lua file that holds the other part of your 3scale configuration:

sudo ln -sf /home/ubuntu/My-Nginx-Configs/v2/nginx-v2.lua /usr/local/openresty/nginx/conf/nginx.lua


The easiest and most powerful way to integrate your API with 3scale is using our Nginx-based API gateway. To make integrating your API with 3scale even easier, we provide the gateway as an AMI in the AWS Marketplace.

This is a zero-setup solution that will get you up & running with your traffic going through Nginx and using 3scale in a matter of minutes.

The AMI contains:

  • a preinstalled Openresty bundle, including Nginx and complementary modules (such as the Lua scripting support).
  • a helper command line tool to get the Nginx configuration generated by 3scale for your API.

Before launching the AMI, note the prerequisites:

  • You will need an AWS account.
  • You should have configured your API details in 3scale beforehand to be able to automatically generate your Nginx configuration. In case you are not sure, read how to configure your API in this tutorial.

Then follow these steps:

  1. Go to the 3scale AMI page in the AWS Marketplace.
  2. You have two options to launch the AMI: 1-click launch or manual launch through the EC2 console. Pick the 1-click launch since it is the simplest way.
  3. Using the 1-click launch option, these are the settings where you will need to make a choice (go with the defaults for all the others unless you have good reasons to change them):
    • AWS region
    • EC2 instance type
    • Key pair (very important – pick a key pair for which you have the corresponding private key available on your computer, otherwise you won't be able to access the instance)
  4. Click the Launch with 1-Click button.
  5. Your instance of the AMI is now being started; it will be ready in about 2 minutes.
  6. Head over to your AWS Management Console and go into the running instances list in the EC2 section.
  7. Check that your instance is ready to be accessed. It is indicated by a green check mark icon in the column named Status Checks.
  8. Click on the instance row, find its public DNS in the lower part of the screen and copy it.
  9. Log in through SSH using the ubuntu user and the private key you chose before. The command will look more or less like:
    ssh -i privateKey.pem ubuntu@ec2-12-34-56-78.compute-1.amazonaws.com
  10. Once you log in, read the instructions that will be printed to the screen: all the necessary commands to manage your gateway are described there. In case you want to read them later, these instructions are located in a file named 3SCALE_README in the home directory.

The fastest way to get your Nginx configuration files from 3scale is by using the command line tool included in the AMI. You just need to run the following command:

download-3scale-config

You will be prompted to enter the following parameters:

  • your 3scale admin domain (e.g. mycompany-admin.3scale.net)
  • your provider API key (it can be found in the Account section of your admin portal)
  • the directory where you want the files to be downloaded: if you simply press Enter, they will be downloaded to /home/ubuntu/3scale-nginx-conf

The tool will save your credentials locally, so that if you make changes to your configuration (for example when you add a new endpoint mapping) you can just run the command without entering them again.

In case you need or want to be prompted again for your credentials you can run the command with the reset option:

download-3scale-config --reset

Once you have downloaded the files, there are a couple of changes that you must make to the file with the .conf extension before you are ready to start Nginx. You can find detailed information on the required changes in the Nginx on-premise setup guide.

You can now start running the API gateway with your own configuration! Assuming you downloaded the files to the default location, they will now be in the directory /home/ubuntu/3scale-nginx-conf/. You should move or copy them to the Nginx configuration directory, which in this case is /opt/openresty/nginx/conf/.

Then you will need to run the following command to start Nginx:

sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /opt/openresty/nginx/conf/YOUR-CONFIG-FILE.conf

You will find other useful commands to operate Nginx in the 3SCALE_README document.

To stop Nginx:

sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /opt/openresty/nginx/conf/YOUR-CONFIG-FILE.conf -s stop

To reload it (useful after you have made changes to the configuration):

sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /opt/openresty/nginx/conf/YOUR-CONFIG-FILE.conf -s reload

If you like the AMI, please leave a 5-star review in the AWS Marketplace listing.
In case you experience any problem, let us know at support@3scale.net.

Creating and testing new versions of your Nginx configuration in a remote server can be quite cumbersome.

In the Nginx on-premise setup guide you will find tips to make that process easier, including adding Nginx as a system service and some advice on managing multiple versions of your configuration files.

Troubleshooting

Most errors in the gateway configuration can be detected and solved by looking at the Nginx logs:

  • access log: /opt/openresty/nginx/logs/access.log
  • error log: /opt/openresty/nginx/logs/error.log

You can find more information about Nginx in the official documentation page: http://nginx.org/en/docs/

Chef is a configuration management tool that automates and simplifies software installation by using reusable configuration scripts called Cookbooks.


The 3scale API gateway is one of the integration methods that 3scale customers use to integrate their APIs with the 3scale API Management Platform. It’s based on OpenResty, a bundle that includes Nginx and some very useful third-party modules that complement it with features such as support for Lua scripting.

This tutorial describes how to use the official 3scale Chef Cookbook to automate the deployment of your API gateway.

The 3scale Chef Cookbook allows any Chef user to automate the deployment of the 3scale API gateway. Running the Cookbook on one or multiple target nodes will install OpenResty plus all the necessary system dependencies required to run it. After the execution completes, the nodes will have an up-and-running gateway listening for incoming API requests.

The Cookbook not only installs the API gateway, but it will also deploy your 3scale Nginx configuration files, specifically tailored for your API configuration, to the exact location they are needed.

This tutorial assumes familiarity with how Chef works and a ready-to-use Chef environment. If that’s not your case, here are some resources that will help you get to that point:

You’ll also need to have previously configured your API in your 3scale admin dashboard. If you haven’t gone through that step yet, you should do it now. You can follow the instructions here (stop at the part about running your proxy on premise).

The first step is to add the default recipe of the Cookbook to your node or role run list.


      {
        "run_list": [
          "recipe[chef-3scale::default]"
        ]
      }
    

There are 4 attributes that you’ll need to set to configure how you use the Cookbook. All of them are under the 3scale namespace.

  • ['3scale']['config-source'] – Where your Nginx configuration files will be taken from. Two options: “local” or “3scale”. Read the section of this tutorial named “Applying your own 3scale configuration” before setting this attribute.
  • ['3scale']['provider-key'] – The key that identifies you as a 3scale customer. It can be found in the “Account” menu of your 3scale admin portal.
  • ['3scale']['admin-domain'] – If your 3scale admin portal domain is “mycompany-admin.3scale.net”, then the value of this attribute should be “mycompany”.
  • ['3scale']['config-version'] – Version ID. If not included, the current configuration from your 3scale account will be used. If included, the value must be a timestamp of one deployment, formatted as in the following example: “2015-09-15-041532”. See the “Rollback process” section for more information on this.

Here you can see the default value of each of these attributes: https://github.com/3scale/chef-3scale/blob/master/attributes/default.rb

This Cookbook uses and depends on the OpenResty Cookbook, so attributes of that Cookbook are also available to you. You can see a full list here.

Since you’ll be using the Nginx configuration files that 3scale generates for you, you won’t be able to use the attributes of the OpenResty Cookbook related to configuration parameters that go in the nginx.conf file.

Here is an example of a JSON node description ready to be used. In this case the configuration files will be downloaded from 3scale, as you can see in the config-source attribute:

    {
      "3scale": {
        "config-source": "3scale",
        "provider-key": "MY_PROVIDER_KEY",
        "admin-domain": "mycompany"
      },
      "openresty": {
        "source": {
          "prefix": "/etc"
        }
      },
      "run_list": [
        "recipe[chef-3scale::default]"
      ]
    }
    

For the API gateway to be configured for your own API endpoints, you need to deploy it using your own set of Nginx configuration files. There are two ways to apply your own configuration files to the Cookbook:

This is the option you should use if:

  • your Nginx configuration has any customization on top of the default files generated by 3scale
  • you have more than one service in 3scale (since you will need to set the domains for each of them in the configuration)

  1. Configure your API in 3scale using the “On-premise Gateway” option.
  2. Click on “Download the Nginx Config files” at the bottom of the screen.
  3. Drop those files into the /files/default/config/ directory of the Cookbook.

To use this option you’ll need to set the ['3scale']['config-source'] attribute to “local” in your node or role description.
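For instance (an illustrative sketch using only the attribute names listed above), a node description for the local option could look like this:

```json
{
  "3scale": {
    "config-source": "local"
  },
  "openresty": {
    "source": {
      "prefix": "/etc"
    }
  },
  "run_list": [
    "recipe[chef-3scale::default]"
  ]
}
```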

With this option, the Cookbook will automatically fetch the Nginx configuration files from your 3scale account when running the deployment. You’ll need to set the following attributes in your node or role description:

  • ['3scale']['config-source'] = “3scale”
  • ['3scale']['provider-key'] = (see attributes section)
  • ['3scale']['admin-domain'] = (see attributes section)

In both cases, the Nginx configuration files will be copied to a subdirectory of /var/chef/cache and symlinked to the Nginx working directory (/etc/nginx/).

The 3scale Cookbook allows rolling back to a previously deployed version of the configuration. This can be used in those cases where you have a node where the API gateway had already been deployed one or multiple times, and you want to deploy it again but using the configuration files from one of the previous deployments instead of the latest version.

The built-in way to roll back is by using the ['3scale']['config-version'] attribute. Here’s an example of a full node description using the rollback attribute:


{
  "3scale": {
    "provider-key": "YOURPROVIDERKEY",
    "admin-domain": "mycompany",
    "config-version": "2015-09-15-050545"
  },
  "openresty": {
    "source": {
      "prefix": "/etc"
    }
  },
  "run_list": [
    "recipe[chef-3scale::default]"
  ]
}

Troubleshooting

If you’re having problems deploying your API gateway when running the Cookbook, the best first step is to look at Chef’s own logs. Here you can find some useful debugging tips.


If the deployment completed successfully, but the API gateway is not running as expected, the problem is probably in the Nginx configuration files you deployed. The best place to start troubleshooting is the Nginx error log, located at /var/log/nginx/error.log


If there are no errors in the Nginx log, you might want to double check how you configured your API in 3scale. There are plenty of resources available on our support portal, such as this debugging guide.


Please note that OAuth Authentication mode is not available on the APICast cloud hosted API gateway.

This HowTo shows the necessary steps to set up Nginx with 3scale's OAuth extensions so that Nginx acts as an OAuth2 provider.

Currently, only the Authorization Code (Server-Side) Grant Flow is available from the Nginx Integration Wizard. However, you can find nginx config templates for all other flows on our github repository here.

SSL use is mandatory for all OAuth 2 calls.

As 3scale doesn't hold any details about the users that you want to authenticate, in order to integrate with 3scale using OAuth2, we require that you handle user authentication on your side.

In order to do this, you will have to provide the url for a page where Nginx can send users to authorize an application. This page should be behind a login so that the user can be correctly identified and authenticated. Once the user has been authenticated, and the application authorized, the API provider should redirect back to their API Gateway with the outcome of the authorization grant from the user.

When the API Gateway redirects a user to the authorization url, it will also send the following parameters along with the request:

  • scope: the id of the plan to which the application belongs. The application plan defines the scope in 3scale.
  • state: a hash value shared between the API Gateway and the API to identify the request and ensure it has not been tampered with.
  • tok: the value of the access token that will be given to the user if the application is authorised. The token will only be issued when it is exchanged for an authorization code. If the authorization code is not exchanged, the access token will expire after 10 minutes.

If the user successfully identifies themselves and authorizes the application, the authorization page should redirect to an endpoint on the API Gateway. By default this is located at /callback, but it can easily be changed within the Nginx config files to suit your needs.

Let's take a look and see how to set this up!

Below you can find a screenshot of the proxy configuration page. You can access it from the API > Integration section of your 3scale admin portal. If you have used this page before for setting up your proxy with authentication methods other than OAuth, you will notice that there is a new required field: oAuth login URL.

OAuth Nginx Settings
To proceed with the installation you will need to follow most of the same steps as with the basic Nginx proxy integration.

The API backend is the endpoint host of your API. For instance, if you were Twitter the API backend would be http://api.twitter.com/, or if you are the owner of the Sentiment API it would be http://api-sentiment.3scale.net/.

The proxy will redirect all traffic from your sandbox development endpoint to your API backend after all authentication, authorization, rate limits and statistics have been processed.

This will be the url that your users are presented with when they need to log in to your service to authenticate themselves.

The new Nginx OAuth extension allows Nginx to act as an OAuth provider. However, you still need to take care of providing an authorization page for users to login and approve/reject third party application access. This authorization page should be behind a login so a user can be identified and authenticated. Once the approval is done you will need to redirect your logged in user to the callback endpoint where the API gateway will take care of the rest of the workflow.

3scale automatically generates all the files needed to use Nginx as your API gateway and OAuth provider based on the data you input into the Proxy Integration Page. Once you have entered all the required information you can download these files and install them on your own Nginx Server.

If you are familiar with Nginx it should take no time to get your proxy up and running locally. Note that your Nginx installation must have the Lua plugin, and for some of the OAuth2 grant types you must also have Redis installed on your server.

If you are not familiar with Nginx, we recommend installing OpenResty, a bundle of the standard Nginx core with almost all of the third-party Nginx modules you will need built in.

So, let's get started.

For Debian/Ubuntu Linux distributions, install the following packages using apt-get:

sudo apt-get install libreadline-dev libncurses5-dev libpcre3 libpcre3-dev libssl-dev perl
sudo apt-get build-dep nginx

For different systems check out the OpenResty documentation.

Download the code and compile it, replacing VERSION with your desired version (we run 1.2.3.8):

wget http://agentzh.org/misc/nginx/ngx_openresty-VERSION.tar.gz
tar -zxvf ngx_openresty-VERSION.tar.gz
cd ngx_openresty-VERSION/

./configure --prefix=/opt/openresty --with-luajit --with-http_iconv_module -j2

make
make install

At this point, we have Nginx + Lua installed via the excellent OpenResty bundle.

Download and install Redis on the Nginx server (we use version 2.6.16, the stable version at the time of writing):

tar zxvf redis-VERSION.tar.gz
cd redis-VERSION
make
sudo make install

To install and run the Redis server, run the following, accepting all the default values:

sudo ./utils/install_server.sh


Download the proxy configuration files from 3scale by clicking the Download button. This will give you a zip file with six files inside:

  • authorize.lua - This file contains the logic for authorizing the client, redirecting the end_user to the oAuth login page, generating the access token and checking that the return url matches the one specified by the API buyer. It runs when the /authorize endpoint is hit.
  • authorized_callback.lua - This file contains the logic for redirecting an API end user back to the API buyer's redirect url. As an API provider, you will need to call this endpoint once your user successfully logs in and authorizes the API buyer's requested access. This file gets executed when the /callback endpoint is called by your web application.
  • get_token.lua - This file contains the logic to return the access token for the client identified by a client_id. It gets executed when the /oauth/token endpoint is called.
  • nginx_*.conf - The .conf is a typical Nginx config file. Feel free to edit it or to copy paste it to your existing .conf if you are already running Nginx.
  • nginx_*.lua - This file contains the logic that you defined on the web interface to track usage for various metrics and methods.
  • threescale_utils.lua

Before going ahead, there are a couple of things you need to do.

    1. Modify the .conf file:

      You should change the server_name directive from your sandbox endpoint (typically api-xxx.staging.apicast.io) to the domain of your own developer frontend.

       server {
          listen 80;
          server_name api-xxx.staging.apicast.io;
          underscores_in_headers on;
          ...
       }
      

      If you have only one domain, there is no need to define the server_name directive at all. If server_name is defined, only requests to that domain will be processed by Nginx, so you must either change it or remove it.

      Furthermore, if you are running multiple services within 3scale, each service has its own domain, so you must change them all.

    2. Specify the location of your .lua files in your filesystem.

      Warning! The .lua file must be accessible by the user running the Nginx worker processes (typically www-data on Ubuntu, nobody on Mac OS X); otherwise the Nginx workers will not be able to load the file when processing your incoming API requests.

         access_by_lua_file /PATH/YOUR-LUA-FILE.lua;
      

      The .conf has reminders of the lines that you must change; you can search for "CHANGE" to find all the lines that should be modified.

      You can always modify any of these files to add new features and/or handle custom requirements that are not supported by 3scale's proxy web interface.

      Additionally, you should copy the threescale_utils.lua file to /opt/openresty/lualib

    3. Change the redirect url in your application code from the sandbox proxy to your nginx host.

The only thing left is to start the Nginx-based API gateway. There are many ways to do this; the most straightforward is:

sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /opt/openresty/nginx/conf/YOUR-CONFIG-FILE.conf

The example assumes that the working directory of Nginx is /opt/openresty/nginx, which is the path we passed during installation to configure --prefix=/opt/openresty. You can change it, but be aware of the user privileges.

The example also assumes that the .conf generated by 3scale is placed at /opt/openresty/nginx/conf/. Naturally, you should place the files and directories at the location that best suits your production environment, and start and stop the process as a system daemon instead of executing the binary directly.

To stop a running nginx:

sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /opt/openresty/nginx/conf/YOUR-CONFIG-FILE.conf -s stop

The -s option lets you pass a signal to Nginx. The process that will be stopped is the one whose pid is stored in /opt/openresty/nginx/logs/nginx.pid.

The Nginx logs are by default in the same directory, /opt/openresty/nginx/logs/. It is highly advisable to check error.log when setting up the whole process.

We are almost ready to roll. The best way to test that your API now supports OAuth is to use Google's excellent OAuth playground: https://developers.google.com/oauthplayground

You will need to set the redirect url of the application you want to test to the Google OAuth playground url: https://developers.google.com/oauthplayground

You can then fill in the settings as per the screenshot below:

Google OAuth Playground Settings

Where the authorization and token endpoint urls are your proxy urls. In the scope field you should put the name of the Application Plan for the application, e.g. Default.

You can then click on "Authorize API", which will redirect you to your login url. You can then log in to a user account on your application and authorize the Application. Once that is done, you will be redirected back to the Google OAuth playground with an authorization code.

You should then exchange this for an access token, and that's it! You now have an access token to call protected endpoints on your API.

You can now make a request to your API, replacing your API backend hostname (in the example, api-sentiment.3scale.net) with your proxy endpoint and adding the access_token parameter. e.g.

curl -X GET "http://{YOUR_PROXY_HOST}/read?access_token={Your Access Token}"

And that's it! You have your API integrated with 3scale.


This section covers the advanced settings option of 3scale's API gateway in the staging environment.

For security reasons any request from 3scale's proxy to your API backend will contain a header called X-3scale-proxy-secret-token. The value of this header can be set by you in the Authentication Settings on the Integration page.

Proxy secret token

Setting the secret token will act as a shared secret between the proxy and your API so that you can block all API requests that do not come from the proxy if you so wish. This gives an extra layer of security to protect your public endpoint while you are in the process of setting up your traffic management policies with the sandbox proxy.

Your API backend must have a publicly resolvable domain for the proxy to work, so anyone who knows your API backend could bypass the credentials checking. Because the API gateway in the staging environment is not meant for production use, that should not be a problem, but it's always better to have a fence available.

The API credentials within 3scale are always user_key or app_id/app_key depending on the authentication mode you are using (OAuth is not available for the API gateway in the staging environment). However, you might want to use different credentials names in your API. In this case you will need to set custom names for the user_key if you are using API key mode:

Custom user_key

or for the app_id and app_key:

Custom app_key/app_id

For instance, you could rename app_id to key if that fits your API better. The proxy will take the name key and convert it to app_id before making the authorization call to 3scale's backend. Note that the new credential name has to be alphanumeric.
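To make the mapping concrete, here is a rough sketch (illustrative values only; the real conversion happens inside the generated Lua file) of the rewrite the proxy performs on the query string:

```shell
# The client sends the renamed credential "key"; the proxy converts it to
# app_id before authorizing against 3scale's backend.
query='key=abc123&format=json'
rewritten=$(printf '%s' "$query" | sed 's/^key=/app_id=/')
echo "$rewritten"   # app_id=abc123&format=json
```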

You can decide whether your API passes credentials in the query string (or body if not a GET) or in the headers.

Proxy Credentials Location

Another important aspect to have a full-fledged configuration is to define your own custom error messages.

It is important to remark that 3scale's API gateway in the staging environment will do a pass through of any error message generated by your API. However, because the management layer of your API is now carried out by the proxy there are some errors that your API will never see since such requests will be terminated by the proxy.

Custom Error Messages

These errors are the following:

  • Authentication failed: this error will be generated whenever an API request does not contain valid credentials. This can be because the credentials are fake, because the application has been temporarily suspended, etc.
  • Authentication missing: this error will be generated whenever an API request does not contain any credentials. This occurs when users forget to add their credentials to an API request.
  • No match: this error means that the request did not match any mapping rule, so no metric is updated. This is not necessarily an error, but it means that either the user is trying random paths or your mapping rules do not cover legitimate cases.

Setting up the proxy configuration is easy, but still some errors can occur on the way. For those cases the proxy can return some useful debug information that will be helpful to track down what is going on.

To enable debug mode on 3scale's API gateway in the staging environment, add the following header, with your provider key as the value, to a request to your proxy:

X-3scale-debug: YOUR_PROVIDER_KEY

When the header is found and the provider key is valid, the proxy will add the following information to the response headers:

X-3scale-matched-rules tells you which mapping rules have been activated by the request; note that it is a list. The header X-3scale-credentials returns the credentials that have been passed to 3scale's backend. Finally, X-3scale-usage tells you the usage that will be reported to 3scale's backend.

You can check the logic for your mapping rules and usage reporting in the Lua file, in the function extract_usage_x(), where x is your service_id.

In this example, the comment -- rule: /v1/word/{word}.json -- shows which particular rule the Lua code refers to. Each rule has a Lua snippet like the one above. In case you were wondering, comments are delimited by --, --[[ and ]]-- in Lua, and by # in Nginx.

Unfortunately, there is no automatic rollback for Lua files if you make changes. However, if your current configuration is not working while the previous one was OK, you can download the previous configuration files from the deployment history.

Deployment history

3scale's API gateway in the staging environment is quite flexible, but there are always things that cannot be done, either because the console interface does not allow it or because of security reasons due to a multi-tenant proxy.

If you need to extend your API proxy you can always download the proxy configuration and run it locally on your own servers. See the on-premise section in the Basic HowTo.

Needless to say, when you are running the proxy on-premise (on your own servers) you can modify the files to accommodate any custom feature you might need. Nginx with Lua is an extremely powerful open-source piece of technology.

We have written a blog post explaining how to augment APIs with Nginx and Lua. Some examples of extensions that can be done:

  • Basic DoS protection: white-lists, black-lists, rate limiting at per-second granularity.

  • Define arbitrarily complex mapping rules.

  • API rewrite rules, e.g. you might want API requests starting with /v1/* to be rewritten to /versions/1/* when they hit your API backend.

  • Content filtering, you can add checks on the content of the requests, either for security or to filter out undesired side effects.

  • Content rewrites, you can transform the results of your API.

  • Many, many more. Combining the power and flexibility of Nginx with Lua scripting is a winning combination.

Over time we will add recipes on how to achieve such extensions for your on-premise proxy. If you are ready for production, you can always ping 3scale's support forums.

Finally, one last note: if you use the configuration files in your on-premise proxy, there is a log() function you can call during processing that will print its arguments to the Nginx error.log file.
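For example (a sketch; the exact signature of log() depends on the generated configuration files, so treat this as an assumption), you might add something like the following inside your Lua file:

```lua
-- Prints a debug message to the Nginx error.log (on-premise only);
-- ngx.var.request is the raw request line provided by Nginx.
log("processing request: " .. ngx.var.request)
```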


If you are running our Nginx-based API gateway in a production environment you might find some useful tips in this article. We have collected here a set of best practices for commonly asked questions.

The 3scale API gateway is based on the highly performant Nginx proxy. A single instance is able to handle large enough API traffic volumes to meet most customers' needs. However, it is recommended practice for production environments to have multiple instances of the gateway running in parallel. Such a setup avoids having a single point of failure at the API gateway layer, and it also provisions extra capacity to handle potential traffic spikes.

The 3scale API proxy is designed to make it really easy to set up a load-balanced environment. It is completely stateless, reaching out to the 3scale backend service to perform all authorization tasks.

If you are currently operating with a single API gateway and you are looking to set up multiple instances in parallel you only need to:

  • deploy as many instances as you want following the instructions here
  • download your Nginx configuration files from 3scale
  • use the same set of files for all your proxies
  • make sure that the server_name is the same for all of them (this will be the public domain of your API, which will also be the domain that resolves to your load balancer in front of the gateways)
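As a sketch (the domain is a placeholder), each gateway instance would share the same server_name, matching the public domain that resolves to your load balancer:

```nginx
# Identical on every gateway instance behind the load balancer
server {
  listen 80;
  server_name api.yourcompany.com;  # public API domain, resolving to the LB
  ...
}
```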

This tip is useful for those customers that are using DNS resolution for load balancing in their API backend. A typical example of this is if you are using AWS Elastic Load Balancing, which will return multiple different IP addresses when clients perform a DNS resolution.

By default, Nginx resolves the domain names of your backend servers only when it is started. It caches the IP and uses that when proxying incoming API requests. Of course, this will be a problem in the scenario discussed above, for two reasons:

  • it will send all the traffic to a single IP, effectively disabling the DNS load balancing
  • the cached IP might not exist anymore because your backend might have scaled down automatically removing some IP addresses of the pool

There is an easy fix for this: forcing Nginx to resolve the domain of the backend servers at runtime. The solution requires using the resolver directive to specify a DNS server:

 resolver 8.8.8.8;

That line uses Google's Public DNS servers but you can configure any other DNS servers, including your own in case you want to resolve private domain names.

The resolver directive has many useful options that you can learn about in the official documentation.
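One common way to force runtime resolution (a sketch; the backend domain is a placeholder) is to combine the resolver directive with a variable in proxy_pass, since Nginx re-resolves domains held in variables at request time:

```nginx
resolver 8.8.8.8 valid=30s;              # cache resolved IPs for at most 30s
set $api_backend "api-backend.example.com";
location / {
  # Using a variable forces Nginx to resolve the domain at runtime,
  # picking up new IPs as your backend pool changes
  proxy_pass http://$api_backend;
}
```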

Bypassing the authorization step in case of network failure

The 3scale service management API is the service that responds to authorization requests sent by the API gateway. The availability of this service is our top priority and it has a very good track record of uptime.

Even if it is an extremely rare event, there can be external circumstances that cause your API gateway to be unable to reach the 3scale service management API (e.g. a problem in the network or in a corporate firewall).

The default behaviour of the API gateway when the authorization request fails is to deny the incoming API call, in order to prevent a potential security breach. However, this behaviour can be customized to fit your requirements.

For example, you could deny incoming API calls from all users except those that come from a whitelist of mission-critical applications. You can implement this behaviour in your configuration by changing the authrep function in your Lua file to match the one in this code snippet. Note that you also need to create a whitelist.lua file with the list of app_ids whose calls should be allowed through. This file should be placed in the same directory as the other Nginx configuration files.
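The whitelist file itself can be very simple. As a sketch (the app_ids are placeholders, and the exact format expected by the snippet may differ), whitelist.lua could return a set of allowed app_ids:

```lua
-- whitelist.lua: app_ids allowed through when 3scale cannot be reached
return {
  ["mission-critical-app-1"] = true,
  ["mission-critical-app-2"] = true,
}
```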

Note

We notify about any problems in our service as soon as they happen. To get timely status updates, follow @3scalestatus on Twitter.


Most of the instructions are on the support pages indicated - we just discuss modifications for SOAP here.

Goal

At the end of this HowTo you will know what changes you need to make to your SOAP envelope and your Nginx and Lua files to successfully integrate with 3scale.

SOAP Envelope

We will explain the changes that need to be made to the SOAP envelope with an example. Imagine that the call to the API backend is made with the following request:

curl -X POST -H "Content-Type: text/xml" -d @request.xml http://your-api.com/path

where request.xml follows the SOAP v1.2 structure and looks like this:

SOAP Envelope without 3scale

The elements necessary for integration with 3scale should be put in the SOAP header. Here is an example of the modified SOAP envelope.

The element t:ApiKey corresponds to the credentials that can be found on your Applications page, and t:method corresponds to the method defined in your SOAP body. This modified request is sent to the API proxy.

curl -X POST -H "Content-Type: text/xml" -d @request.xml http://your-nginx-instance:80
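As a rough illustration of the structure (the element names and namespace here are assumptions; use the ones shown in the screenshots above), such an envelope could be generated like this:

```shell
# Hypothetical request.xml with the 3scale elements in the SOAP header
cat > request.xml <<'EOF'
<?xml version="1.0"?>
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"
              xmlns:t="http://example.com/3scale">
  <env:Header>
    <t:ApiKey>YOUR_API_KEY</t:ApiKey>
    <t:method>yourSoapOperation</t:method>
  </env:Header>
  <env:Body>
    <!-- your operation payload goes here -->
  </env:Body>
</env:Envelope>
EOF
```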

Nginx/Lua Configuration

  1. Follow the instructions here for Nginx configuration Nginx Proxy Configuration. For the moment, ignore the section Download your API proxy configuration from 3scale - we'll come back to that later.
  2. As well as installing the lua plugin as discussed in Nginx Proxy Configuration, you'll need to install an additional module named Luaxml:
    sudo apt-get install luarocks
    sudo luarocks install luaxml
    To verify, inside /usr/local/lib/luarocks/rocks, you should see luaxml installed. Additionally, you may use the command luarocks show luaxml to print the luaxml version.
  3. Now get back to Download your API proxy configuration from 3scale. After entering your Private API host*, you'll need to map SOAP operations to 3scale metrics.
    Soap Mapping
    Choose GET or POST as appropriate and in the Pattern box choose the SOAP operation you want to manage/measure/protect. Prefix the operation name with a /. Choose the metric you want to measure against. After you update and test, it may show errors - proceed with the config download anyway.
  4. Prior to starting Nginx as discussed in Nginx On Premises Setup, a few tweaks to your configuration need to be made.
  5. Modify the nginx_XXXXX.conf, following the instructions in section Download your API proxy configuration from 3scale on Nginx On Premises Setup
  6. Modify the file nginx_XXXXX.lua
    1. Add this snippet to the start of the file before the Service declaration.
    2. Change the signature and this line of method extract_usage_xxxxx as shown here: Or comparing old with new:
      Lua Declaration
    3. In the if ngx.var.service_id == block toward the end of the file, insert and change params.user_key = parameters["user_key"] as shown: Or comparing old with new:
      Lua Declaration
      The API key will be taken from the SOAP xml - discussed below
    4. Also in the if ngx.var.service_id == block, add the path to your API: ngx.var.proxy_pass = "http://backend_www.your-api.com/path/to/your/api"
    5. Also in the if ngx.var.service_id == block, change ngx.var.usage = extract_usage_xxxxxxxx(ngx.var.request)
      to
      ngx.var.usage = extract_usage_xxxxxxxx(ngx.var.request, xmlpayload)
  7. Now, when you copy the nginx config and lua to /usr/local/openresty/nginx/conf/ as discussed in Nginx Proxy Configuration, and you start the server, it should be ready to accept SOAP requests.
  8. Now, assuming nginx is running on localhost:80, your API integration with 3scale may be tested as follows:
    curl -H "Content-Type: text/xml" -d @request.xml -X POST "http://localhost:80/" 

Troubleshooting

  • We assume your Service Authentication Mode Settings are API Key. If you choose app_id/app_key or OAuth mode, you'll need to modify your configuration and SOAP headers slightly. Wherever they are by default taken from HTTP headers or parameters in the lua file, call the extract_value method to take them from the XML instead.
  • If you can't read the incoming request SOAP xml, you may need to upgrade your version of lua to the latest. If this is the case, you'll see an error around code snippet 6.c above of the lua. Upgrade with the following command:
    sudo apt-get install lua5.2
  • If you are experiencing any issues, it can be useful to enable debugging to the server console. In nginx.conf, line 9 (before events) add the following line:
    Lua Declaration
    This will enable you to write log statements to logs and console in lua file as follows: log(" debug message") to dig deeper into any issues you may have.


3scale offers a framework to create interactive documentation for your API just like the interactive documentation of 3scale APIs.

With Swagger 2.0 (based on the Swagger Spec) you will have functional and eye-candy documentation for your API. The interactive documentation will help your developers to explore, test, and integrate with your API.

Every minute spent making your developers happy is a minute well invested on your API :)

If you are looking for documents describing Active Docs in the older version 1.0 and 1.2 go to our Active Docs Legacy documents.

At the end of the section you will have the ActiveDocs setup for your API.

Click on the “API” > “ActiveDocs” tab in your control panel. This will lead you to the list of your Service Specs (initially empty).

List of Service Specs in ActiveDocs

You can add as many Service Specs as you desire; typically, each Service Spec corresponds to one of your APIs. For instance, at 3scale we have four different specs, one for each 3scale API: Service Management, Account Management, Analytics and Billing.

When you add a new Service Spec, you will have to provide: a name, a system name (required to reference the Service Spec from the Developer Portal), whether you want the spec to be public or not, a description meant only for your own consumption, and finally the API JSON spec, which you can see in the figure below.

NOTE – the API JSON spec is the “secret” ingredient of the whole ActiveDocs.

You must generate the specification of your API according to the spec proposed by Swagger. In this HowTo we assume that you already have a valid Swagger 2.0-compliant specification of your API.

Create a Service Spec for ActiveDocs

Once you have created the first API on ActiveDocs by adding a Service Spec, you will see it listed on the “API” > “ActiveDocs” tab.

You can edit it whenever necessary, delete it, or switch it from public to private.

List of Service Specs for ActiveDocs

You can also preview what the ActiveDocs will look like by clicking on the name you gave the Service Spec (in the example we called it Pet Store). You can do this even if the spec is not public yet.

This is what your ActiveDoc will look like!

List of Service Specs for ActiveDocs

Once you are happy with your Swagger, it’s time to make it public and link it on your Developer Portal so that it can be used by your API developers.

For that purpose, you will have to add the following snippet in the content of any page of your Developer Portal. This must be done via the CMS of your Developer Portal. Note that SERVICE_NAME should be the System Name of the Service Spec, pet_store in our example.

  • You can specify only one service per page. If you want to display multiple specifications, the best way is to do it on different pages.
  • This snippet requires jQuery, which typically is already included in the Main Layout on your Developer Portal. If you remove it from there make sure you add the jQuery dependency on the page with ActiveDocs.
  • Make sure you have Liquid tags enabled on the CMS page.
  • The version used in the Liquid tag {% active_docs version: "2.0" %} should correspond to that of the Swagger spec.
  • Ensure that your own server has CORS installed and the developer site is allowed.
  • If you would like to fetch your specification from an external source, change the JavaScript code as follows:
    $(function () {
     window.swaggerUi.options['url'] = "SWAGGER_JSON_URL";
     window.swaggerUi.load();
    });
    You can see an example in the snippet in Step 4 on line 14. Just make sure that this line is not inside the comments block.

And that’s it, simple enough isn’t it? :-)


This section will help you to create a Swagger 2.0-compliant spec for your RESTful API, which is required to power the ActiveDocs on your Developer Portal.

If you would rather just read code, all the examples are on the following page: Swagger Petstore example source code.

3scale ActiveDocs are based on the specification of RESTful web services called Swagger (from Wordnik). This example is based on the Extended Swagger Petstore example and draws all the specification data from the Swagger 2.0 Specification document.

Swagger is not only a specification. It also provides a full-featured framework around it. Namely:

  1. Servers for the specification of the resources in multiple languages (NodeJS, Scala, etc.).
  2. A set of HTML/CSS/JavaScript assets that take the specification file and generate the eye-candy UI.
  3. A swagger codegen project, which allows generation of client libraries automatically from a Swagger-compliant server, with support for creating client-side libraries in a number of modern languages.

3scale’s ActiveDocs is not a Swagger replacement but rather an instantiation of it. With ActiveDocs you do not have to run your own Swagger server or deal with the UI components of the interactive documentation. The interactive documentation is served and rendered from your 3scale Developer Portal.

The only thing you need to do is build a Swagger-compliant specification of your API and add it on your Admin Portal, and the interactive documentation will be all set. Your developers will be able to launch requests against your API through your Developer Portal.

If you already have a Swagger-compliant spec of your API, you can just add it in your Developer Portal (see the HowTo on ActiveDocs Configuration to learn how).

3scale extended the Swagger specification in several ways to accommodate certain features that were needed for our own interactive API documentation:

  1. Auto-fill of API keys
  2. Swagger proxy to allow calls to non-CORS enabled APIs

We recommend that you first read the original spec from the original source:

Swagger Specification.

On the Swagger site there are multiple examples of specs. If you like to learn by example, you can follow the Petstore API example by the Swagger API team.

The Petstore API is an extremely simple API. It is not meant for production but for dissemination and learning.

The Petstore API is composed of 4 methods:

  • GET /api/pets Returns all pets from the system
  • POST /api/pets Creates a new pet in the store
  • GET /api/pets/{id} Returns a pet based on a single ID
  • DELETE /api/pets/{id} Deletes a single pet based on the ID

Because we have the Petstore integrated with 3scale API Management we have to add an additional parameter for authentication. We chose the standard User Key authentication method (there are others) sent in the headers.

Consequently, we need to add the parameters:

user_key: {user_key}

The user_key will be sent by the developers in their requests to your API. The developers will obtain those keys on your Developer Portal.
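To make the developer's side concrete, here is a sketch of the call they would end up making, with the key sent in the headers as chosen above. The endpoint and key values are placeholders, not real Petstore values; the snippet only assembles and prints the command so you can inspect it:

```shell
# Hypothetical values: replace with your real endpoint and a key from your Developer Portal.
API_ENDPOINT="https://petstore.example.com/api/pets"
USER_KEY="YOUR_USER_KEY"

# Credentials travel in a header, per the User Key authentication mode:
REQUEST="curl -H \"user_key: ${USER_KEY}\" -X GET \"${API_ENDPOINT}\""
echo "${REQUEST}"
```

Running the printed command against a live, 3scale-integrated backend would then either return the pets list or a denial, depending on the key's validity.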

Upon receiving the key, you will have to do the authorization check against 3scale using the Service Management API. (This diverges a bit from ActiveDocs into integration territory :-) Check the guide on integration and the API lifecycle for more details.)

Summing up, for your developers the documentation of your API represented in cURL calls would look like this:

However, if you want the documentation to look sexy like this: Swagger Petstore Documentation then you will have to create the following Swagger-compliant spec:

You can use this spec out-of-the-box to test your ActiveDocs. But remember that this is not your API :-).

At first it might look a bit cumbersome, but the Swagger spec is not complex at all. Let’s dissect it a bit.

The Swagger specification relies on a resource declaration that ultimately maps to a hash encoded in JSON. Let’s take the above petstore3scale.json as an example and go step by step…

This is the root document object for the API specification. It lists all the highest level fields.

WARNING – The host must be a domain and NOT an IP address. 3scale will proxy the requests made against your Developer Portal to your host and render the results. For security reasons, this requires your host and basePath endpoint to be white-listed by us. You can only declare a host that is your own: 3scale reserves the right to terminate your account if we detect that you are proxying a domain that does not belong to you. Note that this means that localhost or any other wildcard domain will not work.

Troubleshooting: If your app is hosted on Amazon, and the proxy is not working, please check out an alternative to using the CNAME.

The Info object provides the metadata about the API. This will be presented in the ActiveDocs page.

Holds the relative paths to the individual endpoints. Each path is appended to the basePath to construct the full URL. The paths object may be empty, due to ACL constraints.

Parameters that are not objects use primitive data types. In Swagger these are based on the types supported by JSON-Schema Draft 4. There is an additional primitive data type "file", but it will work only if the API endpoint has CORS enabled (so the upload won’t go via the api-docs proxy). Otherwise it will get stuck at the proxy level.

Currently Swagger supports the following dataTypes:

  1. "integer" with possible formats: "int32" and "int64". Both formats are signed.
  2. "number" with possible formats: "float" and "double"
  3. "string" with possible formats, besides the unformatted version: "byte", "date", "date-time", "password"
  4. "boolean"
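As an illustration (not taken from the Petstore spec), a path parameter using the first of these types might be declared like this inside an operation's parameters array:

```json
{
  "name": "id",
  "in": "path",
  "description": "ID of the pet to fetch",
  "required": true,
  "type": "integer",
  "format": "int64"
}
```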

The JSON Editor Online is quite good: it pretty-prints compact JSON and also provides a browser for the JSON object. We really recommend it if you are not well versed in JSON notation.

Another great tool is the Swagger Editor. It lets you create and edit your Swagger API specification written in YAML inside your browser and preview it in real time. You can also generate a valid JSON spec, which you can upload later in your 3scale admin panel. You can either use the live demo version with limited functionality, or deploy your own Swagger Editor.

Extension to the Swagger spec: Auto-fill of API keys

A very useful extension to the Swagger spec of 3scale’s ActiveDocs is the auto-fill of the API keys. On the parameters, you can define the field x-data-threescale-name with values app_ids, app_keys or user_keys depending on the authentication mode your API is in.

For instance, for the authentication mode App ID/ App Key you might want to declare "x-data-threescale-name": "app_ids" for the parameter that represents the application ID, and "x-data-threescale-name": "app_keys" for the parameter that represents the application key. Just like in the following snippet:
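That snippet is not reproduced here; as a hedged sketch, two such parameter objects (names and locations are illustrative) might look like:

```json
[
  {
    "name": "app_id",
    "in": "query",
    "description": "Your application ID",
    "required": true,
    "type": "string",
    "x-data-threescale-name": "app_ids"
  },
  {
    "name": "app_key",
    "in": "query",
    "description": "Your application key",
    "required": true,
    "type": "string",
    "x-data-threescale-name": "app_keys"
  }
]
```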

If you do so, ActiveDocs will automatically prompt the user of the ActiveDocs to log in to the Developer Portal to get their keys as shown in the screenshot below:

Auto-fill when not logged-in

Or if the user is already logged in, it will show the latest five keys that could be relevant to them, so that they can test right away without having to copy and paste keys around.

Auto-fill when logged-in

The field x-data-threescale-name is an extension to the Swagger spec which will be ignored outside the domain of ActiveDocs.


If you have an OAuth-enabled API, you will want to show off its capabilities to your users. But how can you do this using ActiveDocs? Although this is a bit trickier than usual, it's entirely possible, and we're going to show you how.

After completing these steps you will have a set of ActiveDocs that will allow your users to easily test and call your OAuth enabled API from one place.

Before you start these steps, you will need to have configured the required OAuth endpoints for your chosen workflow. If you are using Nginx as your API proxy you will need to have followed the steps in the Set up OAuth with Nginx API Gateway proxy How To. Additionally, you will need to be familiar with how to set up ActiveDocs: Configure ActiveDocs/Swagger and ActiveDocs/Swagger Specification.

Our first example is for an API using the OAuth2 Client Credentials flow. This API simply returns the sentiment value (from -5 to 5) for a given word. The Sentiment API is only accessible using a valid access token. Users of our API can only call it once they have exchanged their credentials (client_id and client_secret) for an access token.

In order for users to be able to call our API from ActiveDocs they will need to request an access token. Since this is just a call to an OAuth Authorization server, we can create an ActiveDocs Spec for the OAuth Token endpoint. This will allow us to call this endpoint from within ActiveDocs. In our case, for a client credentials flow, our Swagger JSON spec looks like below:

For a resource owner OAuth flow, you will probably also want to add parameters for a user name and password as well as any other additional parameters that you require in order to issue an access token. For our client credentials flow example, we are just sending the client_id and client_secret – which can be populated from the 3scale values for signed in users – as well as the grant_type.
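The token request itself is a plain POST to your Authorization Server. As a sketch (the endpoint URL and credentials are placeholders, not values issued by 3scale), this is the kind of call the ActiveDocs spec above lets a developer make; here we only assemble and print the command:

```shell
# All values are hypothetical: substitute your own token endpoint and credentials.
TOKEN_ENDPOINT="https://api.example.com/oauth/token"
CLIENT_ID="YOUR_CLIENT_ID"
CLIENT_SECRET="YOUR_CLIENT_SECRET"

# Exchange client credentials for an access token (client credentials flow):
TOKEN_REQUEST="curl -X POST \"${TOKEN_ENDPOINT}\" -d \"client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}&grant_type=client_credentials\""
echo "${TOKEN_REQUEST}"
```

The access token returned by a successful exchange is what the developer then pastes into the access_token parameter of the Sentiment API ActiveDocs.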

Then, in the ActiveDocs spec for our Sentiment API we need to add the access_token parameter instead of the client_id and the client_secret.

We can then include our ActiveDocs in the Developer Portal as per usual. In this case, since I want to specify the order in which they display to have the OAuth endpoint first, it looks like this:


Most of the instructions applicable to configure ActiveDocs 2.0 are shown on the pages Configure ActiveDocs/Swagger and ActiveDocs/Swagger Specification. The detailed spec related differences can be found on the official Swagger 1.2 to 2.0 Migration Guide. This article simply documents the extra steps to upgrade to ActiveDocs 2.0.

If your ActiveDocs spec is still in version 1.0, then please first convert it to version 1.2 as described here: Upgrade to ActiveDocs 1.2.

Goal

At the end of this HowTo you will know what changes you need to make to your ActiveDocs Configuration to successfully upgrade to version 2.0.

Click on the “API” > “ActiveDocs” tab in your control panel. This will lead you to the list of your Service Specs. You should have already added a Service Spec (see Step 2 in Configure ActiveDocs/Swagger).

Naming your specification

You should apply appropriate names to achieve the desired effect in your Developer Portal - the heading of your ActiveDocs API listing will appear as System name: Description. Since the System Name is read-only, you may need to recreate the spec by copying the JSON spec and other fields into a new one.

The specification for ActiveDocs 2.0 has some important changes compared to version 1.2. See the Swagger 1.2 to 2.0 Migration Guide for detailed information. The most important changes are:

  • the "swaggerVersion": "1.2" root element is now "swagger": "2.0" and it is a required field
  • the "info" object becomes required
  • the "apiVersion": "1.0" becomes required and is now part of the "info" object: "info": { "version": "1.0", ... }
  • the description in the "info" object becomes non-mandatory
  • the license name field becomes required if "license" object is present
  • the "basePath": "https://example.com/api" field is split into three fields: "host": "example.com", "basePath": "/api" and "schemes": [ "http" ]. None of these fields is mandatory
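To make the root-element changes concrete, here is a minimal sketch of the same spec header before and after migration (the field values are illustrative, not from a real spec). Before, in 1.2:

```json
{
  "swaggerVersion": "1.2",
  "apiVersion": "1.0",
  "basePath": "https://example.com/api"
}
```

After, in 2.0:

```json
{
  "swagger": "2.0",
  "info": { "title": "Example API", "version": "1.0" },
  "host": "example.com",
  "basePath": "/api",
  "schemes": [ "https" ]
}
```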

The following code snippet should be added to your CMS page, where SERVICE_NAME should be the System Name of the Service Spec.
Optionally, if you want to include multiple Swagger specs on one page, you may use this customized snippet:

  • Remember to enable Liquid tags on your CMS configuration page
    Enable Liquid tags
  • Finally, while in the preview mode, you'll need to close the right hand vertical sidebar to see ActiveDocs 2.0.

The new styles are compliant with the newer Swagger spec (2.0). If you would like to change the look and feel, you will have to override the styles. Please bear in mind that since the CSS for Swagger is injected together with the HTML, you will have to define your styles with a higher specificity or with the !important declaration.


In order to support a formal development lifecycle for your developer community, you may want to provide separate staging and production environments to access your API. Thus during development and testing the API may be used without the adverse consequences of operating in a production environment. Note this is different from managing the dev, test, stage, deploy cycle of the API itself.

By the end of this HowTo you will be able to setup 3scale to differentiate between your production and staging environments.

There are several options to provide differentiated staging and production environments for your developer community - 3scale supports a lot of flexibility on how to implement this. Once you decide on which approach is right for you, you could implement this within the Nginx proxy as a custom extension within the config files.

This tutorial describes two ways of providing differentiated environments:

This option is simple to set up and simple in operational use. The main limitations are that both environments share the same backend, and that in reporting views the production and staging traffic is mixed together.

In this option you create one Application Plan for each environment, and set the rate limits and availability of methods/metrics depending on the environment (setting rate limits). For example, the staging plan could have lower rate limits and optionally restrict access to any methods that are not desirable in staging, such as expensive resources or write/delete methods.

Environment-specific rate limits

On your integration side you would have to implement the mapping to the corresponding 3scale methods. Remember that this is only simulating environments and not hitting two different endpoints.

For example assuming we have a developer app under the staging plan which is restricted from "write" calls, this is the auth call to 3scale representing the mapping for POST to /words.....:

and the response will be 409 with the body:
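The exact body depends on your plan configuration, but a denied authorization from the 3scale Service Management API typically carries a status element along these lines (sketch only; the reason text varies):

```xml
<status>
  <authorized>false</authorized>
  <reason>usage limits are exceeded</reason>
</status>
```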

The plan can be upgraded from staging to production at any time without coding changes by the developer, via: 1) a self-service plan change in the Developer Portal, 2) a request to the API provider to make the plan change, or 3) a plan change made unilaterally by the API provider. Which of these applies will depend on your Service settings.

This option allows differentiation of the API backend for each environment. Operational use is just as simple as in option 1. The main difference is that the implementation is slightly more complicated (requiring custom modifications to the Nginx config files). There will also be a performance hit, due to the need for Nginx to parse the response bodies.

In this scenario the backend provides different response logic for the two modes, but the developer should not have to make any coding changes to switch between the two environments. This is achieved in 3scale by using the Nginx proxy to route calls based on the authorization response from 3scale, which indicates whether production calls are enabled or disabled in the respective application plan. For example, when an app under the staging plan makes a call, the proxy makes the auth request to 3scale without knowing whether this is a call to staging or production. The call might look like this: and the response is parsed for 'plan' to determine whether to route the call to the staging or production backend.

The next steps assume you have two environments called exactly "Staging" and "Production". If you want to use different names, just define them appropriately in the API section of your admin console and verify the value returned from the authorization call.

On the Nginx side you will have to apply some modifications to the configuration files generated by 3scale. First, define a new upstream, e.g. Next, assign the server name for your services (not obligatory if you have only one server): where YOUR-SERVICE-DOMAIN-FOR-NGINX is the domain (or domains) assigned to the server where Nginx is hosted. With this in place, specify the '.lua' file path on your server in the 'location: /' part of the config file:
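The config fragments referenced above are not reproduced in this chunk. As a hedged sketch only, with placeholder names, hosts and paths, the three pieces could look like this:

```nginx
# 1. A second upstream for the other environment (name and host are placeholders):
upstream staging_backend {
  server staging.your-api.example.com:80;
}

# 2. Server name(s) for your services:
server_name YOUR-SERVICE-DOMAIN-FOR-NGINX;

# 3. Lua handler path inside the 'location /' block:
location / {
  access_by_lua_file /opt/nginx/conf/your_service.lua;
}
```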

With the '.conf' file customization finished, we can move on to the '.lua' file, where the logic responsible for the conditional proxy pass resides. Find the line starting with function authrep(params, service) and apply the following changes inside that function definition:

  • Comment out the lines shown here (note that the second line is NOT commented):
  • Just after these commented lines, add the following code:
This code uses a regular expression to match the response with the plan's name. If you want to experiment more with regular expressions try Rubular.
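The injected code itself is not shown in this chunk. As a minimal sketch of what such plan matching could look like (the response variable and upstream names are assumptions, not the generated code), assuming the authorization response body contains a <plan> element:

```lua
-- Sketch only: assumes the 3scale authorization response body is in res.body
-- and that upstreams named 'staging_backend' / 'production_backend' exist in the .conf.
local plan = string.match(res.body, "<plan>(.-)</plan>")
if plan == "Staging" then
  ngx.var.proxy_pass = "http://staging_backend"
elseif plan == "Production" then
  ngx.var.proxy_pass = "http://production_backend"
end
```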

Now the calls registering usage in certain environments will automatically hit these environments without any additional changes on the developer's side. As soon as the Application Plan is switched from Staging to Production or vice versa, the API calls will automatically be re-routed to the correct backend.

If any of your environments reside on a hosted server instance (e.g. Heroku, Google App Engine, etc.), you will have to do a hostname rewrite (send a proper host name in the headers). To do that, add the following lines:

  • in '.conf' file under the 'location: /' part add set $host null;
  • in '.lua' file in the code injected by you add:
    ngx.var.host = "STAGING-HOSTNAME" in the if plan == "Staging" condition (STAGING-HOSTNAME can be e.g. 'application.herokuapp.com')
    ngx.var.host = "PRODUCTION-HOSTNAME" in the if plan == "Production" condition.


Tracking response codes from your API is a great way to see how your clients are using the API and also if everything is fine with your servers in real time.

Although this functionality is available for free on all plans, keep in mind that it is also part of the Request Logs feature, which gives you the ability to log not only your response codes but also the request and response bodies (and any other part of requests and responses, for that matter). This is available on the Pro plan and above.

Goal

This HowTo shows how to set up and use the response codes log in the 3scale system. We will walk through the configuration steps and then show how to use this feature in the long run.

Setting up response codes logging is really simple. If you are using the Nginx on-premise proxy, then the configuration files which you download from our servers after defining the mapping on the integration screen will already have all the necessary code inside.

In case you have integrated your API with 3scale using a plugin or the API, you will have to add some additional code. Because the response code is only known after receiving a response from the API itself, you won't be able to use the Authrep call here. Instead you will have to split Authrep into two separate calls: Authentication - to authenticate the request to the API - and Report - to both report the usage and log the response code into the 3scale system.

Because the code itself depends on the language, we will show how the codes are logged using the 3scale Service Management REST API. For the plugins, please refer to their respective GitHub repositories and adjust reporting according to the RESTful example below.

In order to implement a correct flow with Response Code reporting, you will have to split the authentication and reporting process into two steps. In the first step you authorize the call against some usage, i.e.:

curl -v  -X GET "https://su1.3scale.net/transactions/authorize.xml?provider_key=PROVIDER_KEY&user_key=USER_KEY&usage%5Bhits%5D=1"
The second step, after receiving a successful authentication response from 3scale, is processing the call in your API and then reporting the usage together with the response code value. For this example, let's assume it is a successful 200 response. The report call to 3scale will then look like this:
curl -v -X POST "https://su1.3scale.net/transactions.xml" -d 'provider_key=PROVIDER_KEY&transactions%5B0%5D%5Buser_key%5D=USER_KEY&transactions%5B0%5D%5Busage%5D%5Bhits%5D=1&transactions%5B0%5D%5Blog%5D%5Bcode%5D=200'
The part responsible for registering the response code is the last transaction parameter: transactions%5B0%5D%5Blog%5D%5Bcode%5D=200, reporting in this case a 200 response code. This is just the URL-encoded version of transactions[0][log][code]=200.

In other words, you have to report an additional transaction element - a code value that is part of a request log. transactions[0] means that this code value is part of the first transaction in a batch (in this case we are reporting only one transaction). The latter part of the hash refers to the response code value of the request log.
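As a quick local sanity check of that encoding, you can reproduce it in the shell. This sketch simply percent-encodes the brackets ('[' becomes %5B and ']' becomes %5D):

```shell
# The parameter as you would write it, before URL encoding:
RAW='transactions[0][log][code]=200'

# Percent-encode the brackets, as curl would when the value is pre-encoded:
ENCODED=$(printf '%s' "$RAW" | sed -e 's/\[/%5B/g' -e 's/]/%5D/g')
echo "$ENCODED"
# transactions%5B0%5D%5Blog%5D%5Bcode%5D=200
```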

Ok, so you have set up reporting of the response codes from your API into 3scale. How do you verify that it went well and monitor the responses? First, call your API (through a 3scale traffic agent such as the Nginx proxy or a plugin) with valid 3scale credentials. Then verify that the call was correctly reported on the Analytics -> Usage page:

If everything went fine so far, go to the Analytics -> Response codes page. You should see a graph of your latest traffic, color-coded depending on whether the response was 2xx, 4xx or 5xx.

The graph tool lets you view the history of response codes: statistics for different periods of time and at different granularity. You can choose either the default values or go to the 'custom' tab and define the time period and granularity that fit your needs.

If you are on the Pro plan or above, you can also see more detailed statistics for response codes. On the Request log tab you can see the exact code that was logged, its timestamp, and the rest of the request log information you reported. It even goes beyond response codes and gives you the ability to log full request and response parameters.


When things go wrong with your API, it can sometimes be difficult to identify where a problem lies. The following guide aims to help you identify and fix the cause of issues with your API infrastructure.

API infrastructure can be a lengthy and complex topic. However, at a minimum you will have 3 moving parts in your infrastructure:

  1. The API Gateway
  2. 3scale
  3. Your API

                                           +----------------+
                                           |                |
                                           |                |
                                           |     3scale     |
                                           |                |
                                           |                |
                                           +-------+--------+
                                                   ^
                                                   |
                                                2  |
                                                   |
                                                   |
            +---------------+             +--------+--------+                +-----------------+
            |               |             |                 |                |                 |
            |               |      1      |   API Gateway   |       3        |                 |
            |  Client       +------------>+---------------->+--------------->+    API          |
            |               |             |                 |                |                 |
            |               |             |                 |                |                 |
            +---------------+             +-----------------+                +-----------------+

Errors between any of those 3 elements will result in your clients not being able to access your API. However, it won't always be clear exactly which component caused the failure. This guide aims to give you some tips to troubleshoot your infrastructure in order to identify where things may be going wrong.

We will start with some common scenarios first, before moving on to more sophisticated troubleshooting when it's not clear exactly where the problem lies.

There are a number of symptoms that can point to some very common issues with your integration with 3scale. These will vary depending on whether you are at the beginning of your API project, setting up your infrastructure or are already live in production.

Integration Issues

The following sections attempt to outline some common issues you may encounter during the initial phases of your integration with 3scale: at the beginning using APIcast and prior to go live running the API Gateway on Premise.

APIcast

When you are first integrating your API with APIcast in the Service Integration screen you might get some of the following errors shown on the page, or returned by the "Test" call you make to check for a successful integration.

  • Test request failed: execution expired
    • Check your API is reachable from the public internet. APIcast cannot be used with private APIs. If you are not comfortable about making your API publicly available in order to integrate with APIcast, you can always set up a private secret between APIcast and your API to reject any calls not coming from the API Gateway.
  • The accepted format is 'protocol://address(:port)'
    • Remove any paths from the end of your API's "Private Base URL." You can add these in the "Mapping Rules" Pattern or at the beginning of the "API test GET request."
  • Test request failed with HTTP code XXX
    • 405: Check that the endpoint accepts GET requests. APIcast only supports GET requests to test the integration.
    • 403: Authentication parameters missing: If your API already has some authentication in place, APIcast will not be able to make a test request.
    • 403: Authentication failed: If this is not the first Service you have created with 3scale, check that you have created an application under the Service with credentials in order to make the test request. If it is the first Service you are integrating, check that you haven't deleted the test account/application that is created on signup.

On Premise

Once you have successfully tested the integration with APIcast you might want to host the API Gateway yourself.

These are some of the errors you might encounter when you first install your "On Premise" Gateway and call your API through it.

  • 500 Internal Server Error - Check the nginx error.log:
    • failed to load external Lua file X: cannot open X: No such file or directory - Check that:
      • the lua file is in the correct directory
      • the location is specified correctly
      • the file exists
    • lua entry thread aborted: runtime error: <path_to_library>: module 'X' not found:
      • Check that you have installed all required dependencies
  • lua entry thread aborted: runtime error: access_by_lua:1: module 'X' not found:
    • Check that the lua files are in the correct location
  • upstream timed out (110: Connection timed out) while connecting to upstream
    • Check that there are no firewalls between the API Gateway and the public internet that would prevent it from reaching 3scale.

Some other symptoms that may point to an incorrect integration on premise are as follows:

  • API calls routed incorrectly: If you have multiple services pointing to different API backends you will need to ensure you update the server_name directive in each server block of your nginx.conf. By default, this will be set to something like *.staging.apicast.io;
  • Mapping Rules not matched / Double Counting of API calls: Depending on the way you have defined the mapping between methods and actual url endpoints on your API you might find that sometimes methods either don't get matched or get incremented more than once per request. A good way to troubleshoot this is to make a test call to your API with the 3scale debug header. This will return a list of all the methods that have been matched by the API call.
  • Authentication Parameters not found: Ensure you are sending the parameters in the correct location as specified in the Service Integration screen. Note that if you don't choose to send credentials as headers, they should be sent as query parameters for GET requests and body parameters for all other HTTP methods. You can use the 3scale debug header to double-check the credentials being read from the request by the API Gateway.
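A sketch of a debug call, assuming the debug header is named X-3scale-debug and carries your provider key (URL and key values are placeholders); the snippet only assembles and prints the command:

```shell
# Hypothetical values: substitute your own endpoint, provider key and credentials.
PROVIDER_KEY="YOUR_PROVIDER_KEY"
DEBUG_CALL="curl -H \"X-3scale-debug: ${PROVIDER_KEY}\" \"https://api.example.com/resource?user_key=USER_KEY\""
echo "${DEBUG_CALL}"
```

Against a live gateway, the response to such a call would include extra headers listing the matched mapping rules and the credentials the gateway extracted from the request.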

Finally, you might see the following alerts on nginx startup:


2016/03/18 10:31:25 [alert] 9790#0: lua_code_cache is off; this will hurt performance in /path/to/nginx.conf:30

Whilst this setting is a good idea during integration, as it allows making on-the-fly changes to your lua code without restarting nginx, it should be turned on before going live. This directive is available in every server block and should be changed in all production server blocks.
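For production, the directive can be switched back on in each server block, e.g. (a sketch only; the surrounding directives in your generated configuration will differ):

```nginx
server {
  listen 80;
  # Required for production: cache compiled Lua code instead of reloading it on every request
  lua_code_cache on;
  ...
}
```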

Production Issues

It is unlikely you will have many problems with your API Gateway once you have fully tested your set up and been live with your API for a while. However, here are some of the sorts of issues you might encounter in a live production environment.

Availability Issues

Availability issues are normally characterised by seeing upstream timed out errors in your nginx error.log, e.g.


upstream timed out (110: Connection timed out) while connecting to upstream, client: X.X.X.X, server: api.example.com, request: "GET /RESOURCE?CREDENTIALS HTTP/1.1", upstream: "http://Y.Y.Y.Y:80/RESOURCE?CREDENTIALS", host: "api.example.com"

If you are experiencing intermittent 3scale availability issues there could be a number of reasons for this:

  • You are resolving to an old 3scale IP that is no longer in use.

The latest version of the API Gateway configuration files defines 3scale as a variable to force IP resolution each time. For a quick fix, reload your Nginx instance. For a longer-term fix, ensure that instead of defining the 3scale backend in an upstream block, you define it as a variable within each server block, e.g.


server {
  # Enabling the Lua code cache is strongly encouraged for production use. Here it is enabled 
  .
  .
  .
  set $threescale_backend "https://su1.3scale.net:443";

and when you refer to it:


  location = /threescale_authrep {
    internal;
    set $provider_key "YOUR_PROVIDER_KEY";

    proxy_pass $threescale_backend/transactions/authrep.xml?provider_key=$provider_key&service_id=$service_id&$usage&$credentials&log%5Bcode%5D=$arg_code&log%5Brequest%5D=$arg_req&log%5Bresponse%5D=$arg_resp;
  }
  • You are missing some 3scale IPs from your whitelist. This is the current list of IPs that 3scale resolves to:
    • 75.101.142.93
    • 174.129.235.69
    • 184.73.197.122
    • 50.16.225.117
    • 54.83.62.94
    • 54.83.62.186
    • 54.83.63.187
    • 54.235.143.255
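If you manage your firewall with iptables, a loop like the following can generate the outbound allow rules for those IPs. This is a sketch only: it prints the commands rather than applying them, assumes the HTTPS endpoint on port 443, and should be adapted to your own firewall tooling:

```shell
# Print (do not apply) iptables rules allowing outbound HTTPS to each 3scale IP
rules=0
for ip in 75.101.142.93 174.129.235.69 184.73.197.122 50.16.225.117 \
          54.83.62.94 54.83.62.186 54.83.63.187 54.235.143.255; do
  echo "iptables -A OUTPUT -d $ip -p tcp --dport 443 -j ACCEPT"
  rules=$((rules + 1))
done
echo "# $rules rules generated"
```

Remember that this list can change, so prefer allowing the hostname at your firewall if it supports that.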

The above issues refer to problems with perceived 3scale availability. However, you might encounter similar issues with your API availability from the API Gateway if your API is behind an AWS ELB. This is due to the fact that Nginx by default does DNS resolution at start up time and then caches the IP addresses. However, ELBs do not ensure static IP addresses and these might change from time to time. Whenever the ELB changes to a different IP, Nginx stops being able to reach it.

The solution for this is similar to the above fix for forcing runtime DNS resolution.

  1. Set a specific DNS resolver e.g Google's DNS. This is done by adding the following line near the top of the http section: resolver 8.8.8.8 8.8.4.4;
  2. Set your API base URL as a variable, anywhere near the top of the server section. set $api_base "http://api.signupgenius.com:80";
  3. Inside the location / section, find the line that says proxy_pass and replace it with the following: proxy_pass $api_base;
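Putting the three steps together, the relevant parts of nginx.conf end up looking something like this (directive values are illustrative; use your own API base URL):

```nginx
http {
  # 1. Force runtime DNS resolution through a specific resolver (Google's here)
  resolver 8.8.8.8 8.8.4.4;

  server {
    # 2. Hold the API base URL in a variable so Nginx re-resolves it at request time
    set $api_base "http://api.example.com:80";

    location / {
      # 3. Proxy to the variable instead of a fixed, cached upstream address
      proxy_pass $api_base;
    }
  }
}
```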

Post Deploy Issues

If you make any changes to your API e.g to add a new endpoint, you need to make sure you go through the steps to add a new method and url mapping before downloading a new set of configuration files for your API Gateway.

The most common problem after you have modified the configuration downloaded from 3scale is a code error in the lua, which will result in a 500 Internal Server Error, e.g.


$ curl -v -X GET "http://localhost/"
* About to connect() to localhost port 80 (#0)
*   Trying 127.0.0.1... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost
> Accept: */*
> 
< HTTP/1.1 500 Internal Server Error
< Server: openresty/1.5.12.1
< Date: Thu, 04 Feb 2016 10:22:25 GMT
< Content-Type: text/html
< Content-Length: 199
< Connection: close
< 

<html>
<head><title>500 Internal Server Error</title></head>
<body bgcolor="white">
<center><h1>500 Internal Server Error</h1></center>
<hr><center>openresty/1.5.12.1</center>
</body>
</html>
* Closing connection #0

You can then look in the nginx error.log to dig down into the cause, e.g.


2016/02/04 11:22:25 [error] 8980#0: *1 lua entry thread aborted: runtime error: /home/pili/Nginx/troubleshooting/nginx.lua:66: bad argument #3 to '__newindex' (number expected, got nil)
stack traceback:
coroutine 0:
  [C]: in function '__newindex'
  /home/pili/Nginx/troubleshooting/nginx.lua:66: in function 'error_authorization_failed'
  /home/pili/Nginx/troubleshooting/nginx.lua:330: in function 'authrep'
  /home/pili/Nginx/troubleshooting/nginx.lua:283: in function 'authorize'
  /home/pili/Nginx/troubleshooting/nginx.lua:392: in function  while sending to client, client: 127.0.0.1, server: api-2445581381726.staging.apicast.io, request: "GET / HTTP/1.1", host: "localhost"

In the access.log this will look something like:


127.0.0.1 - - [04/Feb/2016:11:22:25 +0100] "GET / HTTP/1.1" 500 199 "-" "curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3"

The above sections should give you a good overview of the most common, well-known issues that you might encounter at any stage of your 3scale journey.

If all of these have been checked, and you are still unable to find the cause and solution for your issue, you should proceed to the more detailed operational troubleshooting sections below. You should start at your API and work your way back to the Client in order to try to identify the point of failure.

Troubleshooting 101

If you are experiencing failures when connecting to a server, whether that is the API Gateway, 3scale or your API, the following troubleshooting steps should be your first port of call:

1. Can we connect?

Use telnet to check basic TCP/IP connectivity, e.g. telnet api.example.com 443

  • Success

  $ telnet echo-api.3scale.net 80
  Trying 52.21.167.109...
  Connected to tf-lb-i2t5pgt2cfdnbdfh2c6qqoartm-829217110.us-east-1.elb.amazonaws.com.
  Escape character is '^]'.
  Connection closed by foreign host.
  • Failure

  $ telnet su1.3scale.net 443
  Trying 174.129.235.69...
  telnet: Unable to connect to remote host: Connection timed out

2. Is it me or is it them?

Try to connect to the same server from different network locations, devices and directions, e.g. if your client is unable to reach your API, try to connect to your API from a machine that should have access, such as the API Gateway.

If any of the attempted connections succeed, then you can rule out any problems with the actual server and concentrate your troubleshooting on the network between them as this is where the problem will most likely lie.

3. Is it a DNS issue?

Try to connect to the server by using its IP address instead of its hostname, e.g. telnet 94.125.104.17 80 instead of telnet apis.io 80.

This will rule out any problems with the DNS.

You can get the IP address for a server using dig, e.g. for 3scale: dig su1.3scale.net, or dig any su1.3scale.net if you suspect a host may resolve to multiple IPs.

NB: Some hosts block dig any.

4. Is it an SSL issue?

You can use openssl to test:

  • Secure connections to a host or IP, e.g. from the shell prompt: openssl s_client -connect su1.3scale.net:443

Output:


CONNECTED(00000003)
depth=1 C = US, O = GeoTrust Inc., CN = GeoTrust SSL CA - G3
verify error:num=20:unable to get local issuer certificate
---
Certificate chain
 0 s:/C=ES/ST=Barcelona/L=Barcelona/O=3scale Networks, S.L./OU=IT/CN=*.3scale.net
   i:/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3
 1 s:/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3
   i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIE8zCCA9ugAwIBAgIQcz2Y9JNxH7f2zpOT0DajUjANBgkqhkiG9w0BAQsFADBE
...
TRUNCATED
...
3FZigX+OpWLVRjYsr0kZzX+HCerYMwc=
-----END CERTIFICATE-----
subject=/C=ES/ST=Barcelona/L=Barcelona/O=3scale Networks, S.L./OU=IT/CN=*.3scale.net
issuer=/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3
---
Acceptable client certificate CA names
/C=ES/ST=Barcelona/L=Barcelona/O=3scale Networks, S.L./OU=IT/CN=*.3scale.net
/C=US/O=GeoTrust Inc./CN=GeoTrust SSL CA - G3
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1:RSA+MD5
Shared Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 3281 bytes and written 499 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: A85EFD61D3BFD6C27A979E95E66DA3EC8F2E7B3007C0166A9BCBDA5DCA5477B8
    Session-ID-ctx: 
    Master-Key: F7E898F1D996B91D13090AE9D5624FF19DFE645D5DEEE2D595D1B6F79B1875CF935B3A4F6ECCA7A6D5EF852AE3D4108B
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    TLS session ticket lifetime hint: 300 (seconds)
    TLS session ticket:
    0000 - a8 8b 6c ac 9c 3c 60 78-2c 5c 8a de 22 88 06 15   ..l..<`x,\.."...
    0010 - eb be 26 6c e6 7b 43 cc-ae 9b c0 27 6c b7 d9 13   ..&l.{C....'l...
    0020 - 84 e4 0d d5 f1 ff 4c 08-7a 09 10 17 f3 00 45 2c   ......L.z.....E,
    0030 - 1b e7 47 0c de dc 32 eb-ca d7 e9 26 33 26 8b 8e   ..G...2....&3&..
    0040 - 0a 86 ee f0 a9 f7 ad 8a-f7 b8 7b bc 8c c2 77 7b   ..........{...w{
    0050 - ae b7 57 a8 40 1b 75 c8-25 4f eb df b0 2b f6 b7   ..W.@.u.%O...+..
    0060 - 8b 8e fc 93 e4 be d6 60-0f 0f 20 f1 0a f2 cf 46   .......`.. ....F
    0070 - b0 e6 a1 e5 31 73 c2 f5-d4 2f 57 d1 b0 8e 51 cc   ....1s.../W...Q.
    0080 - ff dd 6e 4f 35 e4 2c 12-6c a2 34 26 84 b3 0c 19   ..nO5.,.l.4&....
    0090 - 8a eb 80 e0 4d 45 f8 4a-75 8e a2 06 70 84 de 10   ....ME.Ju...p...

    Start Time: 1454932598
    Timeout   : 300 (sec)
    Verify return code: 20 (unable to get local issuer certificate)
---
  • SSLv3 support (NOT supported by 3scale)

openssl s_client -ssl3 -connect su1.3scale.net:443

Output


CONNECTED(00000003)
140735196860496:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:s3_pkt.c:1456:SSL alert number 40
140735196860496:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:s3_pkt.c:644:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : SSLv3
    Cipher    : 0000
    Session-ID: 
    Session-ID-ctx: 
    Master-Key: 
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1454932872
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
---

These are just some examples; you can find more details on usage in the OpenSSL man pages.

You should go through the following checks in order to identify where an issue with requests to your API might lie.

API

To confirm that the API is up and responding to requests, try to make the same request directly to your API, i.e. not going through the API Gateway. Ensure that you are sending all of the same parameters and headers as the request that goes through the API Gateway. If you are unsure of the exact request that is failing, capture the traffic between the API Gateway and your API.

If the call succeeds, you can rule out any problems with the API, otherwise you should troubleshoot your API further.

API Gateway > API

To rule out any network issues between the API Gateway and the API, try to make the same call as before, directly to your API, from your API Gateway server.

If the call succeeds, you can move on to troubleshooting the API Gateway itself.

API Gateway

There are a number of steps to go through in order to check that the API Gateway is working correctly.

1. Is the API Gateway up and running?

Try to log in to the machine where the gateway is running. If this fails, your Gateway server might be down.

Once you have logged in, check that the nginx process is running. You can do this by running ps ax | grep nginx or htop.

If you see nginx master process and nginx worker process in the list, Nginx is running.

2. Are there any errors in the gateway logs?

Here are some common errors you might see in the gateway logs, e.g. in error.log:

  • API Gateway can't connect to API

upstream timed out (110: Connection timed out) while connecting to upstream, client: X.X.X.X, server: api.example.com, request: "GET /RESOURCE?CREDENTIALS HTTP/1.1", upstream: "http://Y.Y.Y.Y:80/RESOURCE?CREDENTIALS", host: "api.example.com"
  • API Gateway can't connect to 3scale

2015/11/20 11:33:51 [error] 3578#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /api/activities.json?user_key=USER_KEY HTTP/1.1", subrequest: "/threescale_authrep", upstream: "https://54.83.62.186:443/transactions/authrep.xml?provider_key=YOUR_PROVIDER_KEY&service_id=SERVICE_ID&usage[hits]=1&user_key=USER_KEY&log%5Bcode%5D=", host: "localhost"

API Gateway > 3scale

Once we are sure the API Gateway is running correctly, the next step is troubleshooting the connection between the API Gateway and 3scale.

1. Can the API Gateway reach 3scale?

If you are using Nginx as your API Gateway, you should see the following in the nginx error logs when the gateway is unable to contact 3scale.


2015/11/20 11:33:51 [error] 3578#0: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 127.0.0.1, server: , request: "GET /api/activities.json?user_key=USER_KEY HTTP/1.1", subrequest: "/threescale_authrep", upstream: "https://54.83.62.186:443/transactions/authrep.xml?provider_key=YOUR_PROVIDER_KEY&service_id=SERVICE_ID&usage[hits]=1&user_key=USER_KEY&log%5Bcode%5D=", host: "localhost"

The main thing to note here is the upstream value. This IP corresponds to one of the IPs that the 3scale API Management service resolves to, so you know there is some problem reaching 3scale. You can do a reverse DNS lookup to check the domain for an IP by calling nslookup.
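For instance, you can pull the upstream IP out of such a log line and feed it to nslookup. The log line below is the sample shown above, trimmed for brevity:

```shell
# Extract the upstream IP from a sample nginx error.log line
line='upstream: "https://54.83.62.186:443/transactions/authrep.xml?provider_key=KEY"'
ip=$(echo "$line" | sed -n 's/.*upstream: "[a-z]*:\/\/\([0-9.]*\):.*/\1/p')
echo "$ip"
# → 54.83.62.186
# nslookup "$ip"   # uncomment to perform the actual reverse DNS lookup
```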

Just because the API Gateway is unable to reach 3scale, it does not necessarily mean that 3scale is down. One of the most common reasons for this would be firewall rules preventing the API Gateway from connecting to 3scale.

As such, there could be some network issues between the Gateway and 3scale that could be causing connections to timeout. If the above is happening, you should go through the steps in "Troubleshooting generic connectivity issues" to identify where the problem lies.

In order to rule out networking issues, you can use traceroute or mtr to check the routing and packet transmission. It might also be useful to run the same command from a machine that is able to connect to 3scale and your API Gateway and compare the output.

Additionally, to see the traffic that is being sent between your API Gateway and 3scale, you can use tcpdump, as long as you temporarily switch to using the HTTP endpoint for the 3scale API Management Service (su1.3scale.net).

2. Is the API Gateway resolving 3scale addresses correctly?

Ensure you have the resolver directive added to your nginx.conf

e.g In nginx.conf


http {
  lua_shared_dict api_keys 10m;
  server_names_hash_bucket_size 128;
  lua_package_path ";;$prefix/?.lua;";
  init_by_lua 'math.randomseed(ngx.time()) ; cjson = require("cjson")';

  resolver 8.8.8.8 8.8.4.4;

You can also dig any su1.3scale.net to see the IP addresses currently in operation for the 3scale API Management Service. Please note that this is not the entire range of IP addresses that might be used by 3scale, as some might be swapped in and out for capacity reasons. Additionally, we may add more domain names for the 3scale API Management Service in the future, so you should always test against the specific address which will have been supplied to you during integration.

3scale

1. Is 3scale available?

Check @3scalestatus on twitter

2. Is 3scale returning an error?

It's also possible that 3scale is available but is returning an error to your API Gateway which would prevent calls going through to your API. Try to make the authorization call directly in 3scale and check the response. If you get an error, check the Error Codes section to see what the issue could be.
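As a sketch, the direct AuthRep call looks like the following. The snippet only builds and prints the curl command; the credential values are placeholders you must substitute before running it:

```shell
# Build the Service Management AuthRep request (placeholder credentials, not real ones)
provider_key="YOUR_PROVIDER_KEY"
user_key="USER_KEY"
url="https://su1.3scale.net/transactions/authrep.xml?provider_key=${provider_key}&user_key=${user_key}&usage%5Bhits%5D=1"
echo "curl -v \"$url\""
```

A 200 response means the credentials authorize and the hit was reported; anything else will include an error explanation in the body.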

3. Use the 3scale debug headers

Another option is to turn on the 3scale debug headers by making a call to your API with the X-3scale-debug header, e.g.

curl -v -X GET "https://api.example.com/endpoint?user_key=USER_KEY" -H "X-3scale-debug: YOUR_PROVIDER_KEY"

This will return the following headers with the API response:


< X-3scale-matched-rules: /, /api/contacts.json
< X-3scale-credentials: access_token=TOKEN_VALUE
< X-3scale-usage: usage[hits]=2
< X-3scale-hostname: HOSTNAME_VALUE
4. Check the Integration errors

You can also check the Integration errors on your Admin dashboard for any issues reporting traffic to 3scale. You can find this at https://YOUR_DOMAIN-admin.3scale.net/apiconfig/errors

Some common reasons for integration errors:

Client > API Gateway

1. Is the API Gateway reachable from the public Internet?

Try directing a browser to the IP address (or domain name) of your Gateway server. If this fails, make sure that you have opened the firewall on the relevant ports.

2. Is the API Gateway reachable by the Client?

If possible, try to connect to the API Gateway from the Client using one of the methods outlined earlier (telnet, curl, etc.). If the connection fails, the problem lies in the network between the two.

Otherwise, you should move on to troubleshooting the Client making the calls to the API.

Client

1. Test the same call using a different client

If a request is not returning the expected result, test with a different HTTP client. For example, if you are calling an API with a Java HTTP client and you see something unexpected, cross-check with cURL.

It might also help to call the API through a proxy between the client and the gateway to capture the exact parameters and headers being sent by the client.

2. Inspect the traffic sent by client

You can use a tool like Wireshark to see the requests being made by the client. This will allow you to identify whether the client is actually making calls out to the API and the details of the request.

Other Issues

Active Docs Issues

You might sometimes find that calls that work when you call the API from the command line fail when going through Active Docs.

In order to enable Active Docs calls to work, we send these out through a proxy on our side. This proxy will add certain headers that can sometimes cause issues on the API if they are not expected. To identify if this is the case you can try the following steps:

1. Use petstore.swagger.io

Swagger provides a hosted swagger-ui at petstore.swagger.io which you can use to test your swagger spec and API going through the latest version of swagger-ui. If both swagger-ui and ActiveDocs fail in the same way, you can rule out any issues with Active Docs or the Active Docs proxy and focus the troubleshooting on your own spec. Alternatively, you can check the swagger-ui github repo for any known issues with the current version of swagger-ui.

2. Check firewall allows connections from Active Docs proxy

Our recommendation here is not to whitelist IP addresses for clients using your API. The ActiveDocs proxy uses floating IP addresses for high availability, and there is currently no mechanism to notify you of any changes to these IPs.

3. Call the API with incorrect credentials

One way to identify whether the Active Docs proxy is working correctly is to call your API with invalid credentials. This will help you to confirm or rule out any problems with both the Active Docs proxy and your API Gateway.

If you get a 403 code back from the API call (or whichever code you have configured on your Gateway for invalid credentials) the problem lies with your API, as the calls are reaching your Gateway at the very least.

4. Compare calls

To identify any differences in headers and parameters between calls made from Active Docs versus outside of Active Docs, it can sometimes be helpful to run your calls through some sort of service (API Tools On Premise, Runscope, etc...) that allows you to inspect and compare your HTTP calls before sending them on to your API. This will allow you to identify any potential headers and/or parameters in the request that could be causing issues on your side.

Appendix

Logging in Nginx

For a comprehensive guide on this, check out the Nginx Logging and Monitoring docs.

Enabling debugging log

To find out more about this we encourage you to check out the nginx debugging log documentation.

You can double check the error codes that are returned by the 3scale Service Management endpoints by taking a look at our Service Management API Active Docs.

However, here is a list of the most important codes returned by 3scale, and the conditions under which they are returned:

  • 400: bad request, this can be because of:
    • invalid encoding
    • payload too large
  • 403:
    • credentials are not valid
    • sending body data to 3scale for a GET request
  • 404: non-existing entity referenced, e.g. applications, metrics, etc.
  • 409: usage limits exceeded
  • 422: missing required parameters
  • 429: Too Many Requests - sent when you exceed 2000 reqs/s

Most of these error responses will also contain an XML body with a machine-readable error category as well as a human-readable explanation.

Please note that, if you are using the standard API Gateway configuration, any non-200 return code from 3scale will result in a 403 being returned to the client. As such, you will need to check the real code returned by 3scale in your API Gateway logs.
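That behaviour can be sketched as follows. This is an illustration of the mapping described above, not the gateway's actual Lua code, and the status value is hypothetical:

```shell
# Sketch of the default gateway behaviour: any non-200 answer from 3scale
# is surfaced to the client as a 403
threescale_status=409   # hypothetical: usage limits exceeded
case "$threescale_status" in
  200) client_status=200 ;;  # authorized: the call is proxied through to the API
  *)   client_status=403 ;;  # anything else becomes Forbidden for the client
esac
echo "$client_status"
# → 403
```

This is why the gateway logs, not the client response, are the place to look for the underlying 3scale error.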