Red Hat And 3scale
The tutorial is based on a collaboration between Red Hat and 3scale to provide a full-stack API solution. This solution includes design, development, and hosting of your API on the Red Hat JBoss xPaaS for OpenShift, combined with the 3scale API Management Platform for full control, visibility, and monetization features.
The API itself can be deployed on Red Hat JBoss xPaaS for OpenShift, which can be hosted in the cloud as well as on premises (that's the Red Hat part). The API management (the 3scale part) can be hosted on Amazon Web Services (AWS), using 3scale APIcast or OpenShift. This gives a wide range of configuration options for maximum deployment flexibility.
The diagram below summarizes the main elements of this joint solution. It shows the whole integration chain including enterprise backend systems, middleware, API management, and API customers.
For specific support questions, please contact us.
This tutorial shows three different deployment scenarios step by step:
- Scenario 1 – A Fuse on OpenShift application containing the API. The API is managed by 3scale with the API gateway hosted on Amazon Web Services (AWS) using the 3scale AMI.
- Scenario 2 – A Fuse on OpenShift application containing the API. The API is managed by 3scale with the API gateway hosted on APIcast (3scale's cloud hosted API gateway).
- Scenario 3 – A Fuse on OpenShift application containing the API. The API is managed by 3scale with the API gateway hosted on OpenShift.
This tutorial is split into four parts:
- Part 1: Fuse on OpenShift setup to design and implement the API
- Part 2: Configuration of 3scale API Management
- Part 3a: Scenario 1 – API gateway (nginx) hosted on Amazon Web Services (AWS)
- Part 3b: Scenario 2 – API gateway hosted using APIcast
- Part 3c: Scenario 3 – API gateway hosted using OpenShift
- Part 4: Testing the API and API management
The diagram below shows the roles the various parts play in this configuration.
You will create a Fuse on OpenShift application that contains the API to be managed. You will use the REST quickstart that is included with Fuse 6.1. This requires a medium or large gear; the small gear will run out of memory and perform poorly.
Sign in to your OpenShift online account. Sign up for an OpenShift online account if you don't already have one.
Click the "add application" button after signing in.
Under xPaaS, select the Fuse type for the application.
Now configure the application. Enter the subdomain you'd like your application to show up under, such as "restapitest". This will give a full URL of the form "appname-domain.rhcloud.com" – in the example below "restapitest-ossmentor.rhcloud.com". Change the gear size to medium or large, which is required for the Fuse cartridge. Now click on "create application".
Browse the application hawtio console and sign in.
After signing in, click on the "runtime" tab, select the container, and add the REST API example.
Click on the "add a profile" button.
Scroll down to examples/quickstarts and click the "REST" checkbox, then "add". The REST profile should show up on the container associated profile page.
Click on the runtime/APIs tab to verify the REST API profile.
Verify the REST API is working. Browse to customer 123, which will return the ID and name in XML format.
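As a sketch, you can also verify the endpoint with cURL (the hostname matches the earlier example, and the /cxf/crm path is an assumption based on the Fuse 6.1 REST quickstart defaults):

```shell
# Fetch customer 123 from the REST quickstart; returns the ID and name as XML.
# Replace the host with your own "appname-domain.rhcloud.com" URL.
curl -v http://restapitest-ossmentor.rhcloud.com/cxf/crm/customerservice/customers/123
```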
To protect the API that you just created in Part 1 using 3scale API Management, you first perform the corresponding configuration, which you then deploy according to one of the three scenarios presented below.
Once you have your API set up on OpenShift, you can start setting it up on 3scale to provide the management layer for access control and usage monitoring.
Log in to your 3scale account. You can sign up for a 3scale account at www.3scale.net if you don't already have one. When you log in to your account for the first time, follow the wizard to learn the basics about integrating your API with 3scale.
In API > Integration, you can enter the public URL for the Fuse application on OpenShift that you just created, e.g. "restapitest-ossmentor.rhcloud.com" and click on Test. This will test your setup against the 3scale API Gateway in the staging environment. The staging API gateway allows you to test your 3scale setup before deploying your proxy configuration to AWS.
The next step is to set up the API methods that you want to monitor and rate limit. To do that go to API > Definition and click on 'New method'.
For more details on creating methods, visit our API definition tutorial.
Once you have all of the methods that you want to monitor and control set up under the application plan, you'll need to map these to actual HTTP methods on endpoints of your API. Go back to the integration page and expand the "mapping rules" section.
Create mapping rules for each of the methods you created under the application plan.
Once you have done that, your mapping rules will look something like this:
For more details on mapping rules, visit our tutorial about mapping rules.
Once you've clicked "update and test" to save and test your configuration, you are ready to download the set of configuration files that will allow you to configure your API gateway on AWS. For the API gateway, you should use a high-performance, open-source proxy called nginx. You will find the necessary configuration files for nginx on the same integration page by scrolling down to the "production" section.
The next section will now take you through various hosting scenarios.
This section describes scenario 1, where the API gateway is hosted on Amazon Web Services (AWS) using the 3scale AMI.
You should have already completed these steps:
- You have an Amazon Web Services (AWS) account.
- You've created an application and are ready to deploy it to AWS.
- You've created your 3scale API Management configuration.
With this accomplished, you're ready to deploy the API gateway to AWS.
Step 1: Open your EC2 management console
In the left-hand sidebar you will see "AWS Marketplace". Select this and type 3scale into the search, and you will see the 3scale Proxy AMI (Amazon Machine Image) show up in the results. The 3scale Proxy AMI implicitly uses and runs an nginx gateway.
Select the plan that's most appropriate to your application. Then you can either select "review and launch" if you want a simple launch with 3scale or "next: configure instance details" to add additional detail configuration such as shutdown, storage, and security.
Now click "launch". The next screen will ask you to create a new public/private key pair or select an existing one.
If you already have a public/private key pair on AWS, you can reuse it. Otherwise, create a new pair.
Your 3scale proxy is now running on AWS. In the next section, see how your API and API management can be tested.
This section describes scenario 2, where the API gateway is hosted on APIcast (3scale's cloud hosted API gateway).
Once you're done with step 5 of part 2, in which you configured 3scale API Management and saved and tested your configuration, you are ready to deploy these settings to the production 3scale hosted API gateway, APIcast.
You will find this on the same "integration" page by scrolling down to the "production" section. There you simply click "deploy".
The deployment will take a few minutes, but once it's fully deployed, the Fuse application will be accessed through the APIcast gateway (the public base URL) hosted on APIcast.io. This provides full access control and usage monitoring of your API.
This section covers scenario 3, where you install the 3scale API Management configuration files on your nginx instance on OpenShift.
You should have already completed these steps:
- You have your OpenShift account.
- You have created your application and are ready to deploy it to OpenShift.
- You have created your proxy on 3scale.
With that accomplished, you are ready to set up your OpenShift application and deploy your configuration.
Create an application with the DIY cartridge, either with the client tools (RHC) or through the console.
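With the RHC client tools, that step might look like the following (the application name diytestnginix and namespace ossmentor match the examples used later in this section; substitute your own):

```shell
# Create a DIY (do-it-yourself) application to host the nginx gateway
rhc app create diytestnginix diy-0.1 --namespace ossmentor
```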
Stop the OpenShift application so you don't get port binding errors.
rhc app stop diytestnginix --namespace ossmentor
Use SSH to get the OpenShift shell.
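For example, using the RHC client tools (app name and namespace as above):

```shell
# Open a shell session on the gear running the DIY application
rhc ssh diytestnginix --namespace ossmentor
```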
Set up a PATH variable so that ldconfig can be found; otherwise the build fails with the "PATH env when enabling luajit" error.
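A minimal sketch of that step, assuming ldconfig lives in /sbin on the gear (check with `which ldconfig` or `ls /sbin/ldconfig` first):

```shell
# The luajit build step invokes ldconfig, which is typically in /sbin
# and not on PATH in an SSH session on the gear.
export PATH=/sbin:$PATH
```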
Install the PCRE module.
cd $OPENSHIFT_TMP_DIR
wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.36.tar.bz2
tar jxf pcre-8.36.tar.bz2
Install and build the nginx-openresty package (substitute the OpenResty release you are using for VERSION).
wget http://openresty.org/download/ngx_openresty-VERSION.tar.gz
tar xzvf ngx_openresty-VERSION.tar.gz
cd ngx_openresty-VERSION
./configure --prefix=$OPENSHIFT_DATA_DIR --with-pcre=$OPENSHIFT_TMP_DIR/pcre-8.36 --with-pcre-jit --with-ipv6 --with-http_iconv_module -j2
gmake
gmake install
Navigate to the nginx conf directory.
Call the 3scale API to download the configuration files from your 3scale instance using cURL.
curl -v -X GET "https://YOURDOMAIN-admin.3scale.net/admin/api/nginx.zip?provider_key=YOUR_PROVIDER_KEY" > nginx.zip
Extract the files you just downloaded to the current directory:
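For example, assuming the archive was saved as nginx.zip in the previous step:

```shell
# Unpack the 3scale proxy configuration files into the current directory,
# overwriting any files from a previous download
unzip -o nginx.zip
```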
Rename and update the nginx.conf file.
- Use the mv command to rename the downloaded nginx config file to nginx.conf
- Run env to get the values of OPENSHIFT_DIY_IP and OPENSHIFT_DIY_PORT
- Add the following lines to use the OpenShift data directory environment variable in your nginx.conf
- At the start of the file (this makes the variable visible to Lua code):
env OPENSHIFT_DATA_DIR;
- At the start of the server block:
set_by_lua $openshift_data_dir 'return os.getenv("OPENSHIFT_DATA_DIR")';
Change the server name, IP, and port, using the OPENSHIFT_DIY_IP and OPENSHIFT_DIY_PORT values from env:
listen 127.13.112.1:8080;
# Change server_name to your custom domain, or leave it commented out if you only have one:
#server_name diytestnginix-ossmentor.rhcloud.com;
Change the Lua file name:
# Change the path to point to the right file on your filesystem if needed:
access_by_lua_file $openshift_data_dir/nginx/conf/nginx_2445581129832.lua;
Start nginx from $OPENSHIFT_DATA_DIR/nginx/sbin (on the example gear, $OPENSHIFT_DATA_DIR expands to /var/lib/openshift/54c6763fe0b8cd8484000020/app-root/data/):
./nginx -p $OPENSHIFT_DATA_DIR/nginx/ -c $OPENSHIFT_DATA_DIR/nginx/conf/nginx.conf
If you need to stop nginx, use
./nginx -s stop
Testing the correct functioning of the API and the API management is independent of the chosen scenario. You can use your favorite REST client and run the following commands.
Retrieve the customer instance with id 123.
Create a customer.
Update the customer instance with id 123.
Delete the customer instance with id 123.
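The four calls above can be sketched with cURL as follows. The host, the /cxf/crm path, and the XML payloads are illustrative assumptions based on the Fuse REST quickstart, and user_key is 3scale's default credential parameter; substitute your own gateway URL and key:

```shell
# Base URL of the API as exposed through the gateway, plus your 3scale key
BASE=http://restapitest-ossmentor.rhcloud.com/cxf/crm/customerservice
KEY=YOUR_USER_KEY

# Retrieve the customer instance with id 123
curl -X GET "$BASE/customers/123?user_key=$KEY"

# Create a customer (illustrative XML body)
curl -X POST -H "Content-Type: application/xml" \
     -d '<Customer><name>Jane</name></Customer>' \
     "$BASE/customers?user_key=$KEY"

# Update the customer instance with id 123
curl -X PUT -H "Content-Type: application/xml" \
     -d '<Customer><id>123</id><name>Jane</name></Customer>' \
     "$BASE/customers?user_key=$KEY"

# Delete the customer instance with id 123
curl -X DELETE "$BASE/customers/123?user_key=$KEY"
```

Each call is recorded by the gateway and shows up in the 3scale analytics described in the next step.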
Check the API Management analytics of your API.
If you now log back in to your 3scale account and go to Monitoring > Usage, you can see the various hits of the API endpoints represented as graphs.
This is just one element of API Management that brings you full visibility and control over your API. Other features include:
- Access control
- Usage policies and rate limits
- API documentation and developer portals
- Monetization and billing
For more details about the specific API Management features and their benefits, please refer to the 3scale API Management Platform product description.
For more details about the specific Red Hat JBoss Fuse product features and their benefits, please refer to the JBoss Fuse Overview.
For more details about running Red Hat JBoss Fuse on OpenShift, please refer to the Getting Started with JBoss Fuse on OpenShift.