During the year-plus I have worked at JD.com, I applied the Servlet 3 asynchronous request model across the product detail page system (the back-end data source) and the product detail page unified service system (the many services the page loads asynchronously, such as the inventory service, book-related services, and the extended-warranty service). Here I share the experience and ideas accumulated along the way.
I will discuss the benefits of Servlet 3 asynchronization under four headings:
1. Why implementing request asynchronization requires Servlet 3
2. What benefits request asynchronization brings
3. How to use Servlet 3 asynchronization
4. Some Servlet 3 asynchronization load-test data
First, the general flow of handling an HTTP request:
1. The container receives the request and parses it into an HttpServletRequest;
2. the request is handed to a Servlet for business processing;
3. the response is written out through HttpServletResponse.
In the Servlet 2.x specification all of this is synchronous: receiving the request, the business processing, and the response must complete within a single thread.
1. Why implementing request asynchronization requires Servlet 3
Take Tomcat 6 as an example. Tomcat 6 does not implement the Servlet 3 specification, and it handles requests like this:
org.apache.catalina.connector.CoyoteAdapter#service
    // Recycle the wrapper request and response
    if (!comet) {
        request.recycle();
        response.recycle();
    } else {
        // Clear converters so that the minimum amount of memory
        // is used by this processor
        request.clearEncoders();
        response.clearEncoders();
    }
When the request finishes, the request and response objects are recycled synchronously; request parsing, business processing, and the response therefore must all complete inside one thread and cannot cross a thread boundary.
This is why asynchronization requires a container that implements the Servlet 3 specification, such as Tomcat 7.x.
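The thread handoff that Servlet 3 enables can be sketched in plain Java. This is an illustration only: the Servlet-specific pieces (startAsync() and AsyncContext.complete()) are stood in for by a CompletableFuture so the sketch runs standalone, and all class and method names here are made up.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncHandoff {
    // One "container" thread parses requests; a separate business pool handles them.
    static final ExecutorService PARSER = Executors.newSingleThreadExecutor();
    static final ExecutorService BUSINESS = Executors.newFixedThreadPool(4);

    // The returned future stands in for the AsyncContext: completing it plays the
    // role of writing the response and calling asyncContext.complete().
    public static CompletableFuture<String> handle(String rawRequest) {
        CompletableFuture<String> response = new CompletableFuture<>();
        PARSER.execute(() -> {
            String parsed = rawRequest.trim();               // parse on the container thread
            BUSINESS.execute(() ->                           // cross the thread boundary
                    response.complete("handled:" + parsed)); // respond on a business thread
        });
        return response;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handle(" GET /book ").get(2, TimeUnit.SECONDS)); // prints "handled:GET /book"
        PARSER.shutdown();
        BUSINESS.shutdown();
    }
}
```

Under Servlet 2.x this handoff is impossible, because the container recycles the request as soon as the service thread returns, which is exactly what the CoyoteAdapter code above does.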
2. What benefits request asynchronization brings
2.1. Higher concurrency;
2.2. separate thread pools for request parsing and business processing;
2.3. businesses tiered by importance, with a thread pool per tier;
2.4. monitoring, operations, and degradation on the business thread pools.
2.1. Higher concurrency
Thanks to the upgraded stack, the load tests on JDK 7 with Tomcat 7 (section 4) showed a solid throughput gain over the synchronous model.
2.2. Separate thread pools for request parsing and business processing
Before introducing Servlet 3, our thread model looked like this:
Request parsing, business processing, and response generation were all done by the Tomcat thread pool, on a single thread. Processing could not be split across threads, e.g. handing the request to another thread after receiving it, so we could not define our own business-processing model.
After introducing Servlet 3, the thread model can be reshaped like this:
Now request parsing is done by a single Tomcat thread; once parsing completes, the request is dropped into a business queue and picked up by a business thread pool. This buys us two things:
1. businesses can be tiered by importance, with a thread pool defined per tier;
2. we hold the business thread pools ourselves, which enables many operations such as monitoring and degradation.
2.3. Businesses tiered by importance, with a thread pool per tier
While a system grows, we usually host many services in one system, e.g. the inventory service, book-related services, and the extended-warranty service. We can tier these services by importance and constrain them accordingly:
1. split businesses into a core tier and a non-core tier;
2. define a separate, isolated thread pool per tier;
3. size each tier's pool according to its traffic.
Then, if a non-core business responds slowly because of database connection-pool or network jitter, it cannot affect our core businesses.
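The isolation can be demonstrated with nothing but java.util.concurrent. This is a sketch with illustrative pool sizes and task names, not our production setup: even when the non-core pool is fully stalled, a core-tier task still completes.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TieredPools {
    // One isolated pool per business tier (sizes are illustrative).
    static final ExecutorService CORE_POOL = new ThreadPoolExecutor(
            8, 16, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(1024));
    static final ExecutorService NON_CORE_POOL = new ThreadPoolExecutor(
            2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>(256));

    // Stall the whole non-core tier (simulating DB/network jitter) and check
    // that a core-tier task is still served promptly.
    public static boolean coreUnaffectedByNonCoreStall() throws Exception {
        CountDownLatch stall = new CountDownLatch(1);
        for (int i = 0; i < 4; i++) {
            NON_CORE_POOL.submit(() -> { stall.await(); return null; }); // hangs until released
        }
        Future<String> core = CORE_POOL.submit(() -> "stock-ok");        // core-tier work
        String result = core.get(2, TimeUnit.SECONDS);                   // must not block on non-core
        stall.countDown();                                               // release the stalled tier
        return "stock-ok".equals(result);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(coreUnaffectedByNonCoreStall()); // prints "true"
        CORE_POOL.shutdown();
        NON_CORE_POOL.shutdownNow();
    }
}
```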
2.4. Monitoring, operations, and degradation on the business thread pools
Because the business thread pools are detached from Tomcat, we can monitor them: see how many requests are currently being processed and whether a pool has hit its load ceiling, and raise a business alert once it has.
The figure above is a bare-bones monitoring view of ours that shows the live state: how many tasks are in flight and how many are waiting in the queue; we monitor and alert on these numbers.
We can also perform some simple operations:
scale a business thread pool up, or, when a business misbehaves, clear its pool immediately to keep the container from collapsing, all without waiting for a container restart (a restart costs tens of seconds or even minutes, and the warm-up after startup causes business jitter).
If requests are backing up and load is high, the simplest remedy is to clear the pool outright, rejecting the stale requests, so there is no avalanche effect.
And because the business queues and pools are our own, we can do a lot with these building blocks, e.g. a custom business queue that orders requests by user tier, so higher-tier users get higher-priority processing.
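Because the pool is a plain ThreadPoolExecutor, all of this maps directly onto its API. The sketch below (sizes are illustrative) polls the two numbers a dashboard would chart, drains the backlog the way the degradation described above works, and resizes the pool without a restart:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolOps {
    public static String snapshotAndDrain() throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        CountDownLatch busy = new CountDownLatch(1);
        pool.submit(() -> { busy.await(); return null; }); // occupies the single worker
        pool.submit(() -> "queued-1");                     // waits in the queue
        pool.submit(() -> "queued-2");
        while (pool.getActiveCount() < 1) Thread.sleep(10); // wait until the worker is busy

        // Monitoring: the numbers a dashboard or alert rule would poll.
        int active = pool.getActiveCount();   // tasks executing right now
        int waiting = pool.getQueue().size(); // tasks waiting in the queue

        // Degradation: under overload, drop the backlog instead of letting it snowball.
        pool.getQueue().clear();
        int afterDrain = pool.getQueue().size();

        // Operations: scale the pool up without restarting the container.
        pool.setMaximumPoolSize(8);
        pool.setCorePoolSize(4);

        busy.countDown();
        pool.shutdown();
        return active + "/" + waiting + "/" + afterDrain;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(snapshotAndDrain()); // prints "1/2/0"
    }
}
```

In the real service, each drained task's AsyncContext would also need complete() called, just as the rejection handler below does for rejected tasks.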
3. How to use Servlet 3 asynchronization
For the basics of Servlet 3, see my earlier blog posts.
In this project the usage is quite simple:
1. Receiving the request
    @RequestMapping("/book")
    public void getBook(HttpServletRequest request, @RequestParam(value = "skuId") final Long skuId,
            @RequestParam(value = "cat1") final Integer cat1, @RequestParam(value = "cat2") final Integer cat2) throws Exception {
        oneLevelAsyncContext.submitFuture(request, () -> bookService.getBook(skuId, cat1, cat2));
    }
The tier-1 async context accepts the request and submits the business processing to that tier's thread pool;
2. The business thread pool wrapper
    public void submitFuture(final HttpServletRequest req, final Callable<Object> task) {
        final String uri = req.getRequestURI();
        final Map<String, String[]> params = req.getParameterMap();
        final AsyncContext asyncContext = req.startAsync(); // start the async context
        asyncContext.getRequest().setAttribute("uri", uri);
        asyncContext.getRequest().setAttribute("params", params);
        asyncContext.setTimeout(asyncTimeoutInSeconds * 1000);
        if (asyncListener != null) {
            asyncContext.addListener(asyncListener);
        }
        executor.submit(new CanceledCallable(asyncContext) { // submit the task to the business thread pool
            @Override
            public Object call() throws Exception {
                Object o = task.call(); // invoke the business processing
                if (o instanceof CompletableFuture) {
                    // async business result: write the response when the future completes
                    CompletableFuture<Object> future = (CompletableFuture<Object>) o;
                    future.thenAccept(resultObject -> callBack(asyncContext, resultObject, uri, params))
                          .exceptionally(e -> {
                              callBack(asyncContext, "", uri, params);
                              return null;
                          });
                } else {
                    // synchronous result (String, POJO, or null): write the response
                    // immediately so the request cannot hang until the async timeout
                    callBack(asyncContext, o, uri, params);
                }
                return null;
            }
        });
    }
    private void callBack(AsyncContext asyncContext, Object result, String uri, Map<String, String[]> params) {
        HttpServletResponse resp = (HttpServletResponse) asyncContext.getResponse();
        try {
            if (result instanceof String) {
                write(resp, (String) result);
            } else {
                write(resp, JSONUtils.toJSON(result));
            }
        } catch (Throwable e) {
            resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR); // internal server error
            try {
                LOG.error("get info error, uri : {}, params : {}", uri, JSONUtils.toJSON(params), e);
            } catch (Exception ex) {
            }
        } finally {
            asyncContext.complete(); // always release the async context
        }
    }
Initializing the thread pool:
    @Override
    public void afterPropertiesSet() throws Exception {
        String[] poolSizes = poolSize.split("-");
        // core pool size
        int corePoolSize = Integer.valueOf(poolSizes[0]);
        // maximum pool size
        int maximumPoolSize = Integer.valueOf(poolSizes[1]);
        queue = new LinkedBlockingDeque<Runnable>(queueCapacity);
        executor = new ThreadPoolExecutor(
                corePoolSize, maximumPoolSize,
                keepAliveTimeInSeconds, TimeUnit.SECONDS,
                queue);
        executor.allowCoreThreadTimeOut(true);
        executor.setRejectedExecutionHandler(new RejectedExecutionHandler() {
            @Override
            public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                if (r instanceof CanceledCallable) {
                    CanceledCallable cc = ((CanceledCallable) r);
                    AsyncContext asyncContext = cc.asyncContext;
                    if (asyncContext != null) {
                        try {
                            String uri = (String) asyncContext.getRequest().getAttribute("uri");
                            Map params = (Map) asyncContext.getRequest().getAttribute("params");
                            LOG.error("async request rejected, uri : {}, params : {}", uri, JSONUtils.toJSON(params));
                        } catch (Exception e) {}
                        try {
                            HttpServletResponse resp = (HttpServletResponse) asyncContext.getResponse();
                            resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
                        } finally {
                            asyncContext.complete();
                        }
                    }
                }
            }
        });
        if (asyncListener == null) {
            asyncListener = new AsyncListener() {
                @Override
                public void onComplete(AsyncEvent event) throws IOException {
                }
                @Override
                public void onTimeout(AsyncEvent event) throws IOException {
                    AsyncContext asyncContext = event.getAsyncContext();
                    try {
                        String uri = (String) asyncContext.getRequest().getAttribute("uri");
                        Map params = (Map) asyncContext.getRequest().getAttribute("params");
                        LOG.error("async request timeout, uri : {}, params : {}", uri, JSONUtils.toJSON(params));
                    } catch (Exception e) {}
                    try {
                        HttpServletResponse resp = (HttpServletResponse) asyncContext.getResponse();
                        resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
                    } finally {
                        asyncContext.complete();
                    }
                }
                @Override
                public void onError(AsyncEvent event) throws IOException {
                    AsyncContext asyncContext = event.getAsyncContext();
                    try {
                        String uri = (String) asyncContext.getRequest().getAttribute("uri");
                        Map params = (Map) asyncContext.getRequest().getAttribute("params");
                        LOG.error("async request error, uri : {}, params : {}", uri, JSONUtils.toJSON(params));
                    } catch (Exception e) {}
                    try {
                        HttpServletResponse resp = (HttpServletResponse) asyncContext.getResponse();
                        resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
                    } finally {
                        asyncContext.complete();
                    }
                }
                @Override
                public void onStartAsync(AsyncEvent event) throws IOException {
                }
            };
        }
    }
3. The business processing
bookService.getBook(skuId, cat1, cat2) performs the actual business processing.
4. Returning the response
The response is written directly by the async thread-pool context wrapped above.
5. Tomcat server.xml configuration
    <Connector port="1601" asyncTimeout="10000" acceptCount="10240" maxConnections="10240" acceptorThreadCount="1" minSpareThreads="1" maxThreads="1" redirectPort="8443" processorCache="1024" URIEncoding="UTF-8" protocol="org.apache.coyote.http11.Http11NioProtocol" enableLookups="false"/>
We have since upgraded to jdk1.8.0_51 + Tomcat 8.0.26; we ran into some problems with Http11Nio2Protocol, so for now we still use Http11NioProtocol. Note that the Tomcat pool is configured with maxThreads="1", i.e. a single thread does the request parsing.
4. Some Servlet 3 asynchronization load-test data
Load-test machine: 32-core CPU, 32 GB RAM; jdk1.7.0_71 + tomcat 7.0.57; service response time around 20 ms+. Throughput was measured with the simplest possible single-URL test:
1. Synchronous baseline
    siege-3.0.7]# ./src/siege -c100 -t60s -b http://***.item.jd.com/981821
    Transactions:             279187 hits
    Availability:             100.00 %
    Elapsed time:             59.33 secs
    Data transferred:         1669.41 MB
    Response time:            0.02 secs
    Transaction rate:         4705.66 trans/sec
    Throughput:               28.14 MB/sec
    Concurrency:              99.91
    Successful transactions:  279187
    Failed transactions:      0
    Longest transaction:      1.04
    Shortest transaction:     0.00
2.1. Servlet 3 asynchronous, 100 concurrent connections, 60 s:
    siege-3.0.7]# ./src/siege -c100 -t60s -b http://***.item.jd.com/981821
    Transactions:             337998 hits
    Availability:             100.00 %
    Elapsed time:             59.09 secs
    Data transferred:         2021.07 MB
    Response time:            0.03 secs
    Transaction rate:         5720.05 trans/sec
    Throughput:               34.20 MB/sec
    Concurrency:              149.79
    Successful transactions:  337998
    Failed transactions:      0
    Longest transaction:      1.07
    Shortest transaction:     0.00
2.2. Servlet 3 asynchronous, 600 concurrent connections, 60 s:
    siege-3.0.7]# ./src/siege -c600 -t60s -b http://***.item.jd.com/981821
    Transactions:             370985 hits
    Availability:             100.00 %
    Elapsed time:             59.16 secs
    Data transferred:         2218.32 MB
    Response time:            0.10 secs
    Transaction rate:         6270.88 trans/sec
    Throughput:               37.50 MB/sec
    Concurrency:              598.31
    Successful transactions:  370985
    Failed transactions:      0
    Longest transaction:      1.32
    Shortest transaction:     0.00
As the numbers show, asynchronization raised throughput but lengthened response times: it does not make individual responses faster. What it buys is higher overall throughput plus the flexibility we need: separate thread pools for request parsing and business processing; businesses tiered by importance with a pool per tier; and monitoring, operations, and degradation on the business pools.
Servlet 3 reference material
Original post: http://jinnianshilongnian.iteye.com/blog/2245925