Understanding IIS and Tomcat on Windows
IIS, short for Internet Information Services, is Microsoft’s flagship web server that ships with every Windows Server release. It is tightly integrated into the Windows ecosystem, taking advantage of the operating system’s security model, event logging, and network stack. Because IIS runs as a native Windows process, it can deliver static files, handle HTTP requests, and negotiate SSL handshakes with minimal overhead.
Tomcat, on the other hand, is an open‑source servlet container built on the Java platform. It implements the Java Servlet and JavaServer Pages (JSP) specifications, making it the default choice for Java web applications worldwide. Tomcat runs on the Java Virtual Machine (JVM), which interposes a layer between Java bytecode and the host machine’s CPU instructions. This abstraction buys portability, but it carries overhead that native Windows code avoids, even though just‑in‑time compilation narrows the gap considerably.
When deploying a web application stack on a Windows Server, many administrators default to IIS for the front‑end traffic and keep Tomcat behind it to handle dynamic Java content. The reasoning is straightforward: IIS is easier to configure through the GUI, it offers robust support for Windows authentication, and it can serve static files faster than a Java container. Tomcat’s default installation listens on port 8080, chosen precisely to avoid a conflict with IIS’s default port 80. The two servers can coexist because they bind to different ports, but this separation brings its own set of challenges.
Understanding these foundational differences is essential before attempting to merge the strengths of each platform. The goal is to keep IIS as the traffic broker and front‑door while delegating the execution of servlets and JSPs to Tomcat, creating a hybrid environment that plays to the advantages of both technologies.
Common Port Conflict and Hard‑coded URLs
In the simplest deployment model, IIS handles all traffic on port 80, and Tomcat listens on port 8080. This arrangement forces developers to embed the port number in every URL that points to a Java resource: http://myserver:8080/app/index.jsp. Hard‑coding a port number in application code is a classic anti‑pattern for several reasons.
First, the hard‑coded value ties the application to a particular environment. When the infrastructure changes - say, the server moves behind a load balancer that terminates SSL on port 443 - the application must be updated everywhere it references the old port. Second, if an administrator forgets to include the port number in a URL, the request hits IIS and, because the default web root does not contain the Java application, the user receives a 404 error or, in the worst case, the JSP source is downloaded as a file. Third, when multiple Java applications share the same server, each one may end up on a different port, leading to a confusing web of URLs and a fragmented user experience.
From a design perspective, embedding the port number in the code violates the principle of separation of concerns. The URL generation logic should be abstracted away from the deployment details. Instead, the application should rely on relative URLs or on a single front‑end host that forwards requests to the appropriate back‑end service.
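To make the contrast concrete, here is a minimal Java sketch of the two styles. The class and method names are illustrative only; in a real servlet the context path would come from request.getContextPath() rather than being passed in by hand.

```java
// Sketch: URL generation decoupled from deployment details.
final class AppUrls {
    private AppUrls() {}

    // Anti-pattern: ties every link to one host, port, and scheme.
    static String hardCoded(String resource) {
        return "http://myserver:8080/app/" + resource;
    }

    // Portable alternative: the browser resolves the link against
    // whatever host, port, and scheme the front door (IIS) exposed.
    static String relative(String contextPath, String resource) {
        return contextPath + "/" + resource;
    }
}
```

If the server later moves behind an SSL terminator or a different port, only the front‑end configuration changes; links built the relative way keep working untouched.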
In practice, avoiding hard‑coded ports requires an intermediary layer that can accept requests on the standard HTTP/HTTPS ports and route them internally to Tomcat. This layer can be implemented with an IIS ISAPI filter, a reverse proxy such as IIS URL Rewrite Module, or even an external tool like nginx. By doing so, developers can continue writing URLs without worrying about the underlying port numbers.
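As one illustration of such an intermediary, the IIS URL Rewrite Module (with Application Request Routing enabled to act as the proxy) can forward servlet paths to Tomcat with a rule along these lines. The rule name and URL patterns are examples, not a prescribed configuration:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Forward JSP/servlet traffic to Tomcat on 8080.
             Requires URL Rewrite plus ARR with proxying enabled. -->
        <rule name="TomcatProxy" stopProcessing="true">
          <match url="^(.*\.jsp|.*\.do)(.*)$" />
          <action type="Rewrite" url="http://localhost:8080/{R:0}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

With a rule like this in place, application code can emit plain relative URLs and never mention port 8080.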
Performance Gap: Native Code vs. JVM
IIS’s native code base gives it a performance advantage over Tomcat when it comes to static content delivery. Serving an image, a CSS file, or a simple HTML page requires only a handful of system calls. IIS leverages the Windows kernel’s HTTP protocol stack, HTTP keep‑alive, and content caching to minimize latency and CPU usage.
Tomcat, conversely, runs on the JVM. Even a single request for a static file passes through Java code that parses the request, performs access checks, and then hands off the file to the OS for delivery. The JVM adds a bytecode-to-native translation layer that consumes CPU cycles and increases memory usage. While modern servers with many cores can mitigate this overhead, the difference remains noticeable for high‑throughput scenarios.
When a web application mixes static and dynamic content - an almost universal case - placing static assets under IIS can free up Tomcat to focus on CPU‑intensive servlet processing. IIS also offers built‑in HTTP compression, which reduces the bandwidth required to transfer assets over the network. Tomcat can perform compression, but it typically requires additional configuration or third‑party libraries, and the compression happens in Java, further adding to the overhead.
Another performance consideration is the connection handling model. IIS uses an asynchronous I/O model that can manage thousands of concurrent connections with a small thread pool. Tomcat uses a thread-per-connection model (unless you enable NIO connectors), which can lead to thread exhaustion under heavy load. By letting IIS manage connections and delegating only the dynamic requests to Tomcat, you can achieve a more scalable architecture.
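For reference, both Tomcat‑side mitigations mentioned above - the NIO connector and gzip compression - are enabled in conf/server.xml. The attribute values below are illustrative starting points rather than tuned settings:

```xml
<!-- server.xml: switch the HTTP connector from the default blocking
     (thread-per-connection) model to NIO, and enable gzip compression
     for text-based responses. -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           connectionTimeout="20000"
           compression="on"
           compressableMimeType="text/html,text/css,application/javascript" />
```

Even with NIO enabled, the hybrid layout described here still lets IIS absorb the bulk of the connection churn.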
SSL, HTTPS and the Overhead of Java
Setting up HTTPS in a Windows environment is straightforward with IIS. The IIS Manager provides a wizard to import certificates, configure SNI, and enable TLS protocols. The underlying SSPI (Security Support Provider Interface) handles the cryptographic operations in native code, keeping the performance impact low.
Tomcat’s SSL configuration is more involved. You need to generate a keystore, import the certificate chain, and edit the server.xml file to reference the keystore. The configuration file is in XML, not a GUI, which can lead to errors if the paths or passwords are mistyped. Even after a successful configuration, every HTTPS request that goes through Tomcat requires the JVM to perform the SSL handshake, which is computationally heavier than native Windows implementations.
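For comparison, a Tomcat 7‑era HTTPS connector looks roughly like the following. The keystore path and password are placeholders, and exact attribute names vary across Tomcat versions:

```xml
<!-- server.xml: HTTPS connector backed by a Java keystore.
     The keystore would first be created with something like:
       keytool -genkeypair -alias tomcat -keyalg RSA -keystore C:\conf\tomcat.jks
     Paths and passwords below are placeholders. -->
<Connector port="8443"
           protocol="HTTP/1.1"
           SSLEnabled="true"
           scheme="https" secure="true"
           keystoreFile="C:\conf\tomcat.jks"
           keystorePass="changeit"
           clientAuth="false" sslProtocol="TLS" />
```

A single mistyped path or password here fails only at startup or at the first handshake, which is exactly the fragility the IIS wizard avoids.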
In a scenario where both IIS and Tomcat handle HTTPS traffic, the server load can spike dramatically. Each HTTPS request that bypasses IIS’s fast path ends up being processed by Tomcat’s JVM, potentially exhausting CPU resources. The result is increased latency and, in extreme cases, server crashes under high traffic volumes.
By routing all HTTPS traffic through IIS and using an ISAPI filter or URL rewrite to forward only the dynamic requests to Tomcat, you keep the cryptographic work in the native layer. Tomcat will only see the already decrypted traffic, which is a significant performance win.
Authentication, NTLM and Directory Integration
Windows environments often rely on NTLM or Kerberos authentication for web applications. IIS natively supports these protocols and can integrate seamlessly with Active Directory. It can also enforce access control at the IIS level, returning a 401 status for unauthorized users before the request even reaches the back‑end.
Replicating this behavior in Tomcat is laborious. You would need to program NTLM support into the Java application or use a third‑party library. Moreover, the library must be compatible with all browsers and operating systems, adding maintenance overhead. Even if you succeed, you still need to manage user credentials and roles within your Java application, which can duplicate the logic already handled by IIS.
Delegating authentication to IIS eliminates the need to implement NTLM in Java. IIS authenticates the user first, then forwards the request to Tomcat. If you need to propagate the authenticated identity to the Java layer, you can rely on HTTP headers such as X-Authenticated-User or use the Windows identity as part of the servlet context via the integrated pipeline. This approach keeps authentication logic in a single place, reduces code duplication, and leverages the robust security features built into Windows.
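On the Java side, consuming the forwarded identity can be as simple as reading the header. The sketch below uses a plain map so it stands alone; a real servlet would call request.getHeader(). The header name follows the X-Authenticated-User convention mentioned above, and the DOMAIN\user stripping reflects the usual Windows account format, which is an assumption about your IIS configuration:

```java
import java.util.Map;
import java.util.Optional;

// Sketch: extract the identity IIS forwarded after it performed
// NTLM/Kerberos authentication. Only trust this header if Tomcat's
// connector is reachable exclusively from IIS; otherwise a client
// could spoof it.
final class ForwardedIdentity {
    private ForwardedIdentity() {}

    static Optional<String> from(Map<String, String> headers) {
        String user = headers.get("X-Authenticated-User");
        if (user == null || user.isEmpty()) {
            return Optional.empty();
        }
        // Windows accounts usually arrive as DOMAIN\user;
        // keep only the user part.
        int slash = user.lastIndexOf('\\');
        return Optional.of(slash >= 0 ? user.substring(slash + 1) : user);
    }
}
```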
Resource Management: Processor and Bandwidth Control
One of IIS’s most valuable features for multi‑tenant hosting is the ability to limit processor usage per site. Using the “Processor Throttling” setting, administrators can cap the percentage of CPU that a particular web application consumes. Similarly, the “Bandwidth Throttling” option restricts the maximum data rate per site, preventing a single application from saturating the outbound network.
Tomcat does not offer a comparable per‑application resource limiter. If one application receives a surge of traffic, it can consume all available CPU or memory, starving other applications on the same server. This lack of isolation can lead to unpredictable performance or even crashes.
By assigning static content delivery and traffic shaping responsibilities to IIS, you can keep the Java application layer focused on business logic. IIS’s built‑in throttling ensures that heavy traffic to one site does not jeopardize the stability of other sites. This separation is especially valuable in shared hosting scenarios or when running multiple microservices on a single physical host.
Delegating Responsibilities: A Pragmatic Approach
The ideal architecture uses IIS as the front‑end traffic manager and Tomcat as the servlet engine. IIS handles the following:
• Serving static files (HTML, CSS, JavaScript, images) with HTTP keep‑alive and compression.
• Managing HTTPS handshakes and SSL termination.
• Enforcing NTLM/Kerberos authentication and providing per‑site bandwidth and CPU limits.
• Acting as a reverse proxy that forwards requests for .jsp, .do, or any other Java servlet path to Tomcat.
Tomcat’s responsibilities are narrowed to:
• Executing servlets and JSPs.
• Providing JNDI data sources, JMS, and other Java EE services.
• Returning dynamic responses back to IIS via the proxy layer.
With this delegation, the configuration changes are minimal: you install IIS and Tomcat, keep Tomcat listening on 8080, and install a lightweight ISAPI filter or rewrite rule that routes dynamic URLs to Tomcat. The rest of the infrastructure can remain untouched, and you avoid the pitfalls of port conflicts, hard‑coded URLs, and performance bottlenecks.
Step‑by‑Step Integration for a Single Host
Assume you have a single Windows Server that hosts a web application consisting of both static pages and JSPs. Follow these steps to set up the hybrid stack.
1. Install IIS (if not already present) and create a new website pointing to the root of your C:\inetpub\wwwroot\myapp folder. This folder holds the static files; the Java content is deployed separately under Tomcat's webapps directory (step 2) and is reached only through the proxy layer.
2. Install Tomcat and keep the default connector listening on port 8080. Deploy your .war files under C:\Program Files\Apache Software Foundation\Tomcat\7.0\webapps. Verify that Tomcat starts correctly by browsing http://localhost:8080/yourapp from the same server.
3. Download the JspISAPI filter, which is an IIS extension that forwards JSP and servlet requests to Tomcat. The filter can be found at https://www.jspisapi.com/download. Install the filter and register it as an ISAPI filter in the IIS Manager under ISAPI Filters for your website.
4. Configure the filter to proxy URLs that match a pattern, such as *.jsp for JSP pages or *.do for servlets. The filter reads its configuration from jspisapi.conf, where you specify the Tomcat host (usually localhost), port (8080), and context root.
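A plausible jspisapi.conf along those lines is sketched below. Apart from tomcat.context, which the multi‑host section of this article also uses, the key names and pattern syntax are guesses and should be checked against the filter's own documentation:

```ini
# jspisapi.conf - illustrative only; verify key names against the
# filter's documentation.
tomcat.host=localhost
tomcat.port=8080
tomcat.context=/myapp

# URL patterns handed off to Tomcat (assumed syntax):
forward=*.jsp
forward=*.do
```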
5. Ensure that the filter is placed after the default handler for static files in the IIS request pipeline. This guarantees that static assets are served directly by IIS, while any request ending with .jsp or matching your proxy rule is handed off to Tomcat.
6. Test the setup by browsing http://yourserver/myapp/index.jsp. IIS should serve the JSP via the filter, and the resulting HTML should be sent back to the client. Static assets referenced within the JSP, like style.css or logo.png, will be served by IIS itself.
7. Enable SSL in IIS by importing a certificate and binding HTTPS to the site. Since all requests now pass through IIS first, the TLS handshake is handled natively. The filter will forward the decrypted request to Tomcat over HTTP, avoiding the need to run SSL in the JVM.
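One caveat: because Tomcat now sees only plain HTTP, calls such as request.isSecure() will report false and redirects may be generated with the wrong scheme. If the proxy layer can be made to set an X-Forwarded-Proto header - an assumption, since the filter is not documented to do so - Tomcat's standard RemoteIpValve can restore the original scheme:

```xml
<!-- server.xml (inside <Host>): reconstruct the client's original
     scheme and address from forwarded headers. Works only if the
     IIS-side proxy actually sets these headers. -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Forwarded-For"
       protocolHeader="X-Forwarded-Proto" />
```

If the headers cannot be set, an alternative is to generate scheme‑relative links in the application so the browser reuses whatever scheme IIS served.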
8. Optional: enable HTTP compression and caching for static files in IIS. In the Compression feature, check the boxes for Static and Dynamic to ensure that even dynamic responses from Tomcat benefit from IIS’s gzip handling.
9. Finally, configure IIS’s CPU and bandwidth throttling under Advanced Settings if you anticipate high traffic. This ensures that the Java application cannot starve other sites on the same server.
Extending the Setup to Multiple Virtual Hosts
In a hosting scenario, you may have several distinct web applications, each with its own domain name. The same principles apply, but the configuration must account for each virtual host.
1. Create a separate IIS website for each domain, e.g., siteA and siteB. Point each site’s physical path to its own C:\inetpub\wwwroot\siteA or C:\inetpub\wwwroot\siteB folder.
2. For each site, create a virtual directory that maps to Tomcat's webapps root. For example, under siteA add a virtual directory called tomcat pointing to C:\Program Files\Apache Software Foundation\Tomcat\7.0\webapps\siteA. Repeat for siteB.
3. Install the JspISAPI filter once, but register it separately for each site. Configure each filter instance to forward requests to the appropriate Tomcat context. The jspisapi.conf for siteA will contain tomcat.context=/siteA, while siteB's configuration will point to /siteB.
4. Ensure that the proxy rules for each site are isolated. If you use URL Rewrite, create a rule set per site that only matches its own domain or sub‑path. This prevents cross‑site leaking of URLs.
5. Test each site individually, verifying that static files are served by IIS and dynamic requests are forwarded to Tomcat. Check the logs on both IIS and Tomcat to confirm the routing path.
6. Apply SSL to each site using separate certificates if needed. IIS will handle the TLS handshake for each domain; Tomcat will receive plain HTTP traffic after the filter.
7. If you want per‑site resource limits, configure IIS throttling settings for each website independently. Tomcat remains oblivious to these limits, but the front‑end will enforce them.
Resulting Advantages of the Combined Setup
By adopting IIS as the entry point and delegating servlet execution to Tomcat, you unlock a set of benefits that address the challenges highlighted earlier.
• Cleaner URLs: Users no longer see the :8080 port number in the address bar. The proxy layer maps friendly URLs to the underlying Tomcat services.
• Faster static content: IIS delivers images, CSS, and JavaScript with keep‑alive, compression, and caching, saving CPU cycles for Tomcat.
• Robust authentication: NTLM/Kerberos and Active Directory integration stay within IIS, eliminating the need to code authentication in Java.
• Simplified SSL: The TLS termination is handled by IIS, so you avoid the overhead of Java‑based SSL and reduce the attack surface.
• Better resource isolation: IIS's per‑site throttling prevents a single application from monopolizing CPU or bandwidth, protecting other sites on the server.
• Scalable architecture: Each layer can scale independently. If static traffic spikes, IIS can scale horizontally without touching Tomcat; if Java load increases, Tomcat can be upgraded or moved to a separate node.
• Reduced maintenance: The deployment of Java applications remains straightforward - pack your .war files and drop them into Tomcat's webapps folder. No changes to the IIS configuration are required for most deployments.
• Cost‑effective: Leveraging the built‑in features of IIS eliminates the need for additional reverse‑proxy software, keeping licensing and operational costs low.
In summary, this hybrid strategy preserves the strengths of both servers while mitigating their weaknesses. It results in a cleaner codebase, better performance, stronger security, and easier management - an approach that many Windows‑based web operations adopt for production workloads.




