April 13, 2015 | 5 min reading time

Setting Up Nginx as a Proxy Cache for JIRA


Considering Nginx as a Proxy Server for JIRA? 3 Steps to Make it Happen

Recently a user posted the following question on Atlassian Answers:

Does anyone have experience with *successfully* running JIRA behind an Nginx reverse proxy, using Nginx's proxy_cache? This should provide at least a moderate boost in performance if configured correctly.

Ok, let's dig in.

When an application like JIRA receives a request from a user, it does not have many options for how it responds. The user has requested a specific page, or information about a JIRA issue, so the application queries the database, builds its response, and sends it off. Sometimes the request is for a static file – an image, JavaScript, or CSS – and no database lookup is required.

As the use of the application scales, this method of dealing with requests becomes increasingly inefficient.

One option to drastically reduce the load on the application server is to place a caching proxy server in front of it. This proxy server can serve multiple purposes, such as terminating SSL connections and managing a large number of concurrent inactive sessions, but today we are looking at its use as a proxy cache.
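Before adding any caching, it helps to see the baseline. The following is a minimal sketch of Nginx as a plain reverse proxy for JIRA, assuming JIRA's Tomcat is listening on its default port 8080 on the same host; jira.example.com is a placeholder hostname:

server {
    listen 80;
    server_name jira.example.com;

    location / {
        # Pass the original host and client address through to JIRA
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8080;
    }
}

The caching directives discussed below all build on a block like this one.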

The Actors

There are a few actors present in this play, so let's give them names.

First we have the Origin Server.

The Origin Server generates the content requested by users. It has two responsibilities:

  • Serve application content
  • Decide how that content should be cached, via the HTTP cache headers

The second point is important, and we will come back to it later.
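As an illustration (these are generic values, not what JIRA actually sends), an origin server that wants a stylesheet cached for a day might attach a header like:

Cache-Control: public, max-age=86400

while one that never wants a response reused would send something like:

Cache-Control: no-cache, no-store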

Next is the Cache Server.

The Cache Server (also known as a Caching Proxy Server) receives the initial HTTP request from a client. It will either serve a previously cached response or proxy the request to the Origin Server.

If the request is proxied to the Origin Server, the Origin Server's response headers are read by the Cache Server to determine whether the response should be cached.

Responsibilities of the Cache Server:

  • Determine if the client's HTTP request will accept a cached response, and if there's an item in the cache to respond with
  • Proxy requests to the Origin Server if not serving a cached response, and cache the response if appropriate
  • Respond to the client with either the cached or proxied response
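You can watch these decisions happen once the X-Proxy-Cache debug header from the configuration below is in place. Illustratively, the first request for a cacheable resource is proxied and stored, and a repeat request is answered from the cache:

X-Proxy-Cache: MISS (first request, proxied to the Origin Server and stored)

X-Proxy-Cache: HIT (repeat request, served from the cache)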

Last we have the Client.

Clients will typically have their own cache, such as the cache built into every browser. If the browser has cached a response it will not send any request to the server, but will use its own copy of the response. The client cache relies on directives from the server to determine whether it can cache files or not.

A client which implements a local cache has the following responsibilities:

  • Sending requests
  • Caching responses
  • Deciding whether to serve a response from the local cache or make an HTTP request to retrieve it
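As a concrete example (illustrative values again), a response carrying

Cache-Control: public, max-age=3600

tells the browser it may reuse its local copy for the next hour without contacting the server at all, while no-store tells it never to keep a copy in the first place.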

Nginx as a Cache Server for JIRA

The configuration to set up Nginx as a Proxy Server for JIRA is quite simple, and adding caching is not much harder.

The first thing to do is to set up a proxy cache, by adding a proxy_cache_path directive to the http block:

http block

proxy_cache_path /var/run/nginx-cache levels=1:2 keys_zone=nginx-cache:10m max_size=500m;

  • /var/run/nginx-cache – the path on disk where the cache will be stored
  • levels=1:2 – the directory structure of the cache. With two levels, cached items sit at paths like /var/run/nginx-cache/c/29/b7f54b2df7773722d382f4809d65029c
  • keys_zone=nginx-cache:10m – the name and size of the shared memory zone holding the cache keys. Here the zone is called nginx-cache and will occupy a maximum of 10 MB
  • max_size=500m – the maximum size of the cache on disk, here 500 Megabytes

Next, add proxy_cache parameters for specific servers or locations that should be cached.

location / {
    #...
    proxy_cache nginx-cache;
    proxy_cache_valid 1440m;
    proxy_cache_min_uses 1;
    add_header X-Proxy-Cache $upstream_cache_status;
}

  • proxy_cache nginx-cache; – use the nginx-cache cache zone defined above
  • proxy_cache_valid 1440m; – cache any response with code 200, 301, or 302 for the next 24 hours
  • proxy_cache_min_uses 1; – cache each response after it has been requested once
  • add_header X-Proxy-Cache $upstream_cache_status; – add an X-Proxy-Cache header to the response, with the cache status as its value (HIT, MISS, or BYPASS, for example). Useful for debugging caching.
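A quick way to confirm caching is working is to request the same cacheable resource twice (a static file, say, since at this point JIRA's own headers still prevent most dynamic responses from being cached) and watch the debug header; jira.example.com again stands in for your own hostname:

curl -s -o /dev/null -D - http://jira.example.com/favicon.ico | grep X-Proxy-Cache

The first run should print X-Proxy-Cache: MISS; a second run within the validity window should print X-Proxy-Cache: HIT.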

Now we have enabled caching, and we have reduced the number of requests that are passed through to JIRA to handle directly.

There is a catch, however!

As the use of JIRA scales, and other applications such as Confluence begin interfacing with it, it is common to notice that a large percentage of requests are for 'static' resources. This includes the CSS and JavaScript you would expect, but you also see lots of other duplicate requests that are each served the same response. Over time these responses do change, but they are identical for the vast majority of requests.

An example of this kind of request is the gadget feed – rest/gadgets/1.0/g/feed. This is an XML listing of every JIRA gadget that is currently enabled. Unless a new plugin is installed, or certain gadgets are disabled or enabled, this list does not change. It is, however, one of the most requested resources in large instances, and can take a significant amount of time to generate.

JIRA, however, does not think this should be cached! It adds the following header to the responses it generates:

Cache-Control: no-cache, no-store, no-transform

As you will recall, the Caching Proxy Server will, by default, respect the wishes of the Origin Server and refuse to cache these responses.

In order to override this behaviour, and cache these responses anyhow, we can set one further directive.

proxy_ignore_headers Cache-Control;
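Note that proxy_ignore_headers only changes what Nginx itself caches; the Cache-Control header is still passed through to the browser, so clients will still refuse to cache these responses locally. If you wanted to override that side as well, you could additionally strip the upstream header with Nginx's proxy_hide_header directive – a further tweak, not something the original question asked for:

proxy_hide_header Cache-Control;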

Our final server or location set of caching directives will now look something like the following, with various locations used to tune the cache parameters we require:

server {

    location / {
        #...
        proxy_cache nginx-cache;
        proxy_cache_valid 1440m;
        proxy_cache_min_uses 1;
        add_header X-Proxy-Cache $upstream_cache_status;
    }

    location ~* /(feed)$ {
        #... identical to 'location / { ... }' except for
        proxy_ignore_headers Cache-Control;
    }

}
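With the configuration in place, validate and reload Nginx, then check that the feed location is now served from the cache (hostname again a placeholder):

nginx -t
nginx -s reload
curl -s -o /dev/null -D - http://jira.example.com/rest/gadgets/1.0/g/feed | grep X-Proxy-Cache

The second request for the feed should come back with X-Proxy-Cache: HIT, without JIRA having to regenerate it.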
