Everyone says Volley is exceptionally well written, so today let's open up the Volley source code and see how it actually performs network requests. We'll start from the very first line we write when using it:
~~~
mRequestQueue = Volley.newRequestQueue(App.getInstance());
~~~
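The overload we call here only takes a Context. Roughly (judging from this version of Volley), it just delegates to a two-argument variant with a null HttpStack, which is what lets the code below pick a default stack:
~~~
// Roughly what the single-argument overload does: pass stack == null so
// that the two-argument version picks a default HttpStack itself.
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, null);
}
~~~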
Stepping into that two-argument Volley.newRequestQueue:
~~~
public static RequestQueue newRequestQueue(Context context, HttpStack stack) {
// Cache directory
File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
// Build the User-Agent string
String userAgent = "volley/0";
try {
String packageName = context.getPackageName();
PackageInfo info = context.getPackageManager().getPackageInfo(packageName, 0);
userAgent = packageName + "/" + info.versionCode;
} catch (NameNotFoundException e) {
}
// If no HttpStack was supplied
if (stack == null) {
// Check the SDK version.
// HurlStack and HttpClientStack do the actual network I/O with
// HttpUrlConnection and HttpClient respectively.
if (Build.VERSION.SDK_INT >= 9) {
// Use HttpUrlConnection
stack = new HurlStack();
} else {
// Use HttpClient
// Prior to Gingerbread, HttpUrlConnection was unreliable.
// See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
}
}
// Create the Network
Network network = new BasicNetwork(stack);
// Create the request queue (note: this is NOT a thread) and start it
RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
queue.start();
return queue;
}
~~~
This method initializes the HttpStack, the Network and the RequestQueue, then starts the queue. The point worth remembering: **RequestQueue is not a thread**.
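Since the two-argument overload accepts an HttpStack, you can also inject a stack of your own instead of letting Volley choose one. A minimal sketch (`context` stands for whatever Context you have at hand):
~~~
// Hypothetical usage sketch: force the HttpUrlConnection-based stack,
// regardless of SDK version, by passing it in explicitly.
RequestQueue queue = Volley.newRequestQueue(context, new HurlStack());
~~~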
Now step into RequestQueue.start:
~~~
public void start() {
stop(); // Make sure any currently running dispatchers are stopped.
// Create the cache dispatcher and start it.
mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
mCacheDispatcher.start();
// Create network dispatchers (and corresponding threads) up to the pool size.
for (int i = 0; i < mDispatchers.length; i++) {
NetworkDispatcher networkDispatcher = new NetworkDispatcher(mNetworkQueue, mNetwork,
mCache, mDelivery);
mDispatchers[i] = networkDispatcher;
networkDispatcher.start();
}
}
~~~
It first calls stop() to make sure any currently running CacheDispatcher and NetworkDispatchers have quit. It then creates a new CacheDispatcher and starts it; this is a thread whose run() is an infinite loop. Finally it creates several NetworkDispatchers (4 by default) and starts them all; each of these is also a thread running an infinite loop.
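Where does the default of 4 come from? Roughly, the two-argument RequestQueue constructor used by newRequestQueue delegates with a default pool size:
~~~
// Roughly, from RequestQueue: the convenience constructor falls back to a
// default network thread pool size (4 in this version of Volley), which
// becomes the length of the mDispatchers array started above.
private static final int DEFAULT_NETWORK_THREAD_POOL_SIZE = 4;

public RequestQueue(Cache cache, Network network) {
    this(cache, network, DEFAULT_NETWORK_THREAD_POOL_SIZE);
}
~~~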
CacheDispatcher.run:
~~~
@Override
public void run() {
if (DEBUG) VolleyLog.v("start new dispatcher");
Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
// Make a blocking call to initialize the cache.
mCache.initialize();
while (true) {
try {
// Get a request from the cache triage queue, blocking until
// at least one is available.
// Take one request from the cache queue; blocks here if the queue is empty
final Request request = mCacheQueue.take();
// Mark it as just taken from the cache queue
request.addMarker("cache-queue-take");
// If the request has been canceled, don't bother dispatching it.
// If it was cancelled, finish it and move on
if (request.isCanceled()) {
request.finish("cache-discard-canceled");
continue;
}
// Attempt to retrieve this item from cache.
// Look up a cached HTTP response using this request's cache key
Cache.Entry entry = mCache.get(request.getCacheKey());
// No cache entry found locally
if (entry == null) {
// Mark the cache miss
request.addMarker("cache-miss");
// Cache miss; send off to the network dispatcher.
// and hand the request over to the network queue
mNetworkQueue.put(request);
continue;
}
// If it is completely expired, just send it to the network.
// The local cache entry is fully expired
if (entry.isExpired()) {
// Mark it as expired
request.addMarker("cache-hit-expired");
request.setCacheEntry(entry);
// and put the request back onto the network queue
mNetworkQueue.put(request);
continue;
}
// We have a cache hit; parse its data for delivery back to the request.
// At this point we have a usable locally cached HTTP response
request.addMarker("cache-hit");
// Parse the cached data into a Response (mark)
Response<?> response = request.parseNetworkResponse(
new NetworkResponse(entry.data, entry.responseHeaders));
request.addMarker("cache-hit-parsed");
// The cached data does not need refreshing
if (!entry.refreshNeeded()) {
// Completely unexpired cache hit. Just deliver the response.
// Deliver it straight to the Listener we registered (mark)
mDelivery.postResponse(request, response);
} else {
// Soft-expired cache hit. We can deliver the cached response,
// but we need to also send the request to the network for
// refreshing.
// The cached response is soft-expired and needs a refresh
request.addMarker("cache-hit-refresh-needed");
request.setCacheEntry(entry);
// Mark the response as intermediate.
response.intermediate = true;
// Deliver the cached result and also put the request back onto the network queue (mark)
// Post the intermediate response back to the user and have
// the delivery then forward the request along to the network.
mDelivery.postResponse(request, response, new Runnable() {
@Override
public void run() {
try {
mNetworkQueue.put(request);
} catch (InterruptedException e) {
// Not much we can do about this.
}
}
});
}
} catch (InterruptedException e) {
// We may have been interrupted because it was time to quit.
if (mQuit) {
return;
}
continue;
}
}
}
~~~
There is a lot of code here, but the logic is simple: take a request off the cache queue, check whether a cached HTTP response exists locally and whether it has expired, and based on that either put the request onto the network queue to be fetched, or answer it straight from the locally cached data. We will come back to the places tagged mark in a moment. Before moving on, it helps to know exactly what isExpired() and refreshNeeded() check; see the sketch below.
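The expiry checks live in Cache.Entry. Below is a trimmed sketch of that class (field list abbreviated): ttl is the hard expiry and softTtl the soft one, which is exactly the distinction the CacheDispatcher relies on above.
~~~
// Trimmed sketch of Cache.Entry: isExpired() means the entry must be
// re-fetched; refreshNeeded() means it may still be delivered but should
// also be refreshed from the network.
public static class Entry {
    public byte[] data;                          // cached response body
    public Map<String, String> responseHeaders;  // cached response headers
    public long ttl;     // hard expiry time, in epoch millis
    public long softTtl; // soft expiry time, in epoch millis

    public boolean isExpired() {
        return this.ttl < System.currentTimeMillis();
    }

    public boolean refreshNeeded() {
        return this.softTtl < System.currentTimeMillis();
    }
}
~~~
With that distinction in mind, let's look at the NetworkDispatcher.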
NetworkDispatcher.run:
~~~
@Override
public void run() {
Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND);
Request request;
while (true) { // also an infinite loop
try {
// Take a request from the queue.
// Take a request from the network queue.
// If the request was cacheable, it is here because the CacheDispatcher
// decided above that it needs to go out to the network.
request = mQueue.take();
} catch (InterruptedException e) {
// We may have been interrupted because it was time to quit.
if (mQuit) {
return;
}
continue;
}
try {
request.addMarker("network-queue-take");
// If the request was cancelled already, do not perform the
// network request.
// The request was cancelled
if (request.isCanceled()) {
request.finish("network-discard-cancelled");
continue;
}
// Tag the request (if API >= 14)
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
TrafficStats.setThreadStatsTag(request.getTrafficStatsTag());
}
// Perform the network request.
// Network.performRequest is where the real network I/O happens.
// The result is wrapped into a NetworkResponse, which carries
// statusCode, data, headers and notModified.
// mark
NetworkResponse networkResponse = mNetwork.performRequest(request);
request.addMarker("network-http-complete");
// If the server returned 304 AND we delivered a response already,
// we're done -- don't deliver a second identical response.
if (networkResponse.notModified && request.hasHadResponseDelivered()) {
request.finish("not-modified");
continue;
}
// Parse the response here on the worker thread.
// Parse the raw response according to the concrete Request subclass we used
// (e.g. JsonObjectRequest parses the result into a JSONObject)
Response<?> response = request.parseNetworkResponse(networkResponse);
request.addMarker("network-parse-complete");
// Write to cache if applicable.
// TODO: Only update cache metadata instead of entire record for 304s.
// If caching is allowed, store the response in the cache
if (request.shouldCache() && response.cacheEntry != null) {
mCache.put(request.getCacheKey(), response.cacheEntry);
request.addMarker("network-cache-written");
}
// Post the response back.
// Mark that this request has had a response delivered
request.markDelivered();
// Deliver the result to our Listener (mark)
mDelivery.postResponse(request, response);
} catch (VolleyError volleyError) {
parseAndDeliverNetworkError(request, volleyError);
} catch (Exception e) {
VolleyLog.e(e, "Unhandled exception %s", e.toString());
mDelivery.postError(request, new VolleyError(e));
}
}
}
~~~
At this point we have walked through the whole flow, but it probably still feels hazy. Why? Because we skipped several details. If you look at the comments, I tagged quite a few places with mark; let's now go through what happens at each of those marked spots.
Back in CacheDispatcher.run:
~~~
// Parse the cached data into a Response (mark)
Response<?> response = request.parseNetworkResponse(new NetworkResponse(entry.data, entry.responseHeaders));
~~~
Here the data is parsed into a Response. Wait, why are we already parsing data when no network request has happened yet? Look closely: this is the cached data. parseNetworkResponse is an abstract method of Request, so let's see how one of its concrete subclasses, JsonObjectRequest, implements it:
~~~
...
@Override
protected Response<JSONObject> parseNetworkResponse(NetworkResponse response) {
try {
String jsonString =
new String(response.data, HttpHeaderParser.parseCharset(response.headers));
return Response.success(new JSONObject(jsonString),
HttpHeaderParser.parseCacheHeaders(response));
} catch (UnsupportedEncodingException e) {
return Response.error(new ParseError(e));
} catch (JSONException je) {
return Response.error(new ParseError(je));
}
}
~~~
It first turns the response.data byte array into a String, using the charset declared in the HTTP headers, and then returns Response.success() with a freshly constructed JSONObject. Why a JSONObject? Don't forget this is JsonObjectRequest; recall that when we use JsonObjectRequest, onResponse hands us a JSONObject directly.
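As an aside, HttpHeaderParser.parseCharset used above roughly does the following (sketch with the constants inlined): scan the Content-Type header for a charset parameter and fall back to ISO-8859-1, HTTP's default charset, when none is present.
~~~
// Rough sketch of HttpHeaderParser.parseCharset (constants inlined):
// look for "charset=..." among the Content-Type parameters.
public static String parseCharset(Map<String, String> headers) {
    String contentType = headers.get("Content-Type");
    if (contentType != null) {
        String[] params = contentType.split(";");
        for (int i = 1; i < params.length; i++) {
            String[] pair = params[i].trim().split("=");
            if (pair.length == 2 && pair[0].equals("charset")) {
                return pair[1];
            }
        }
    }
    return "ISO-8859-1";
}
~~~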
Next, Response.success:
~~~
/**Returns a successful response containing the parsed result. */
public static <T> Response<T> success(T result, Cache.Entry cacheEntry) {
return new Response<T>(result, cacheEntry);
}
~~~
It just builds a Response object from the values we passed in. Here are Response's constructors:
~~~
private Response(T result, Cache.Entry cacheEntry) {
this.result = result;
this.cacheEntry = cacheEntry;
this.error = null;
}
private Response(VolleyError error) {
this.result = null;
this.cacheEntry = null;
this.error = error;
}
~~~
They simply store the data. So when we use Volley, how do we actually get the data back? Recall the listener we write:
~~~
new Response.Listener<JSONObject>() {
    @Override
    public void onResponse(JSONObject response) {
        // response is the parsed result handed back to us
    }
}
~~~
Sure enough, Response defines a Listener interface and an ErrorListener interface:
~~~
/**Callback interface for delivering parsed responses. */
public interface Listener<T> {
/**Called when a response is received. */
public void onResponse(T response);
}
/**Callback interface for delivering error responses. */
public interface ErrorListener {
/**
* Callback method that an error has been occurred with the
* provided error code and optional user-readable message.
*/
public void onErrorResponse(VolleyError error);
}
~~~
As for when these callbacks actually get invoked, let's look at the next marked spot:
~~~
// Deliver it straight to the Listener we registered (mark)
mDelivery.postResponse(request, response);
~~~
ResponseDelivery is an interface; the implementation we care about is ExecutorDelivery.
Its constructor takes a Handler parameter:
~~~
/**
* Creates a new response delivery interface.
* @param handler {@link Handler} to post responses on
*/
public ExecutorDelivery(final Handler handler) {
// Make an Executor that just wraps the handler.
mResponsePoster = new Executor() {
@Override
public void execute(Runnable command) {
handler.post(command);
}
};
}
~~~
Where is it created? When we construct the RequestQueue, it actually delegates to another RequestQueue constructor:
~~~
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
this(cache, network, threadPoolSize,
new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}
public RequestQueue(Cache cache, Network network, int threadPoolSize,
ResponseDelivery delivery) {
mCache = cache;
mNetwork = network;
mDispatchers = new NetworkDispatcher[threadPoolSize];
mDelivery = delivery;
}
~~~
Look at the last parameter: that is the ResponseDelivery we are after. When it is created, it is handed a Handler, and that Handler is bound to the UI thread's Looper via Looper.getMainLooper().
Back to ExecutorDelivery:
~~~
public ExecutorDelivery(final Handler handler) {
// Make an Executor that just wraps the handler.
mResponsePoster = new Executor() {
@Override
public void execute(Runnable command) {
handler.post(command);
}
};
}
~~~
The constructor does exactly one thing: it creates an Executor whose execute method posts the given Runnable with that UI-thread Handler. Where does the Runnable end up? On the UI thread, of course. In other words, whatever is inside command's run() executes on the UI thread, so we strongly suspect this is where the callbacks are invoked.
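If the mechanism is unfamiliar, here is a minimal sketch (not Volley code) of how posting to a main-looper Handler moves work onto the UI thread:
~~~
// Minimal sketch: a Handler bound to the main Looper runs posted Runnables
// on the UI thread, no matter which thread calls post().
Handler uiHandler = new Handler(Looper.getMainLooper());

// e.g. called from a dispatcher (background) thread:
uiHandler.post(new Runnable() {
    @Override
    public void run() {
        // Executes on the UI thread; safe to touch views or invoke
        // user-facing callbacks here.
    }
});
~~~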
Now for the method we actually care about, postResponse:
~~~
@Override
public void postResponse(Request<?> request, Response<?> response) {
postResponse(request, response, null);
}
@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
request.markDelivered();
request.addMarker("post-response");
mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
~~~
It marks the request and then hands a new ResponseDeliveryRunnable to the Executor we just looked at.
ResponseDeliveryRunnable is an inner class of ExecutorDelivery; let's see what it does:
~~~
private class ResponseDeliveryRunnable implements Runnable {
private final Request mRequest;
private final Response mResponse;
private final Runnable mRunnable;
public ResponseDeliveryRunnable(Request request, Response response, Runnable runnable) {
mRequest = request;
mResponse = response;
mRunnable = runnable;
}
@SuppressWarnings("unchecked")
@Override
public void run() {
// If this request has canceled, finish it and don't deliver.
if (mRequest.isCanceled()) {
mRequest.finish("canceled-at-delivery");
return;
}
// Deliver a normal response or error, depending.
if (mResponse.isSuccess()) {
mRequest.deliverResponse(mResponse.result);
} else {
mRequest.deliverError(mResponse.error);
}
// If this is an intermediate response, add a marker, otherwise we're done
// and the request can be finished.
if (mResponse.intermediate) {
mRequest.addMarker("intermediate-response");
} else {
mRequest.finish("done");
}
// If we have been provided a post-delivery runnable, run it.
if (mRunnable != null) {
mRunnable.run();
}
}
}
~~~
It implements the Runnable interface and its constructor takes three parameters. Where those three parameters come from is exactly what we want to know, so let's trace them:
~~~
@Override
public void postResponse(Request<?> request, Response<?> response) {
postResponse(request, response, null);
}
@Override
public void postResponse(Request<?> request, Response<?> response, Runnable runnable) {
request.markDelivered();
request.addMarker("post-response");
mResponsePoster.execute(new ResponseDeliveryRunnable(request, response, runnable));
}
~~~
So they are passed over from the CacheDispatcher:
~~~
if (!entry.refreshNeeded()) {
// Completely unexpired cache hit. Just deliver the response.
mDelivery.postResponse(request, response);
} else {
...
}
~~~
Here the two-argument overload is called, so the Runnable is null. Where do the request and the response come from?
~~~
final Request request = mCacheQueue.take();
Response<?> response = request.parseNetworkResponse(
new NetworkResponse(entry.data, entry.responseHeaders));
~~~
Familiar code, right? We analyzed these lines already: request is what was taken off the queue, and response is the wrapped result. Remember how the wrapping was done?
~~~
@Override
protected Response<JSONObject> parseNetworkResponse(NetworkResponse response) {
try {
String jsonString =
new String(response.data, HttpHeaderParser.parseCharset(response.headers));
return Response.success(new JSONObject(jsonString),
HttpHeaderParser.parseCacheHeaders(response));
} catch (UnsupportedEncodingException e) {
return Response.error(new ParseError(e));
} catch (JSONException je) {
return Response.error(new ParseError(je));
}
}
~~~
And we are back where we started; we have finally found where the parameters come from. Now let's continue with ResponseDeliveryRunnable.run, which contains these few lines:
~~~
// Deliver a normal response or error, depending.
if (mResponse.isSuccess()) {
mRequest.deliverResponse(mResponse.result);
} else {
mRequest.deliverError(mResponse.error);
}
~~~
At this point we are on the UI thread, and the request is used to deliverResponse:
~~~
@Override
protected void deliverResponse(T response) {
mListener.onResponse(response);
}
~~~
At last, light at the end of the tunnel. All that remains is to confirm that mListener is the listener we passed in:
~~~
public JsonRequest(String url, String requestBody, Listener<T> listener,
ErrorListener errorListener) {
this(Method.DEPRECATED_GET_OR_POST, url, requestBody, listener, errorListener);
}
public JsonRequest(int method, String url, String requestBody, Listener<T> listener,
ErrorListener errorListener) {
super(method, url, errorListener);
mListener = listener;
mRequestBody = requestBody;
}
~~~
It is! So far we have traced the whole path: a request is added to the cache queue, moved from the cache queue onto the network queue when necessary, checked against the local cache, its result wrapped, and after a chain of calls the result finally lands in our listener. The only part we have not looked at yet is the actual network request, so let's keep following the marks:
~~~
// Deliver the cached result and also put the request back onto the network queue (mark)
// Post the intermediate response back to the user and have
// the delivery then forward the request along to the network.
mDelivery.postResponse(request, response, new Runnable() {
@Override
public void run() {
try {
mNetworkQueue.put(request);
} catch (InterruptedException e) {
// Not much we can do about this.
}
}
});
~~~
This time the three-argument ResponseDeliveryRunnable is constructed, so mRunnable is definitely non-null, which means this block
~~~
// If we have been provided a post-delivery runnable, run it.
if (mRunnable != null) {
mRunnable.run();
}
~~~
gets to run, meaning the request is put back onto the network queue. On to the next mark.
In NetworkDispatcher.run, remember what it does? It keeps taking requests off mNetworkQueue and executing them. Here is the spot we marked:
~~~
// Perform the network request.
// Network.performRequest is where the real network I/O happens.
// The result is wrapped into a NetworkResponse, which carries
// statusCode, data, headers and notModified.
// mark
NetworkResponse networkResponse = mNetwork.performRequest(request);
request.addMarker("network-http-complete");
~~~
Network is an interface; its implementation here is BasicNetwork, the one we saw being created at the very beginning. BasicNetwork.performRequest:
~~~
@Override
public NetworkResponse performRequest(Request<?> request) throws VolleyError {
long requestStart = SystemClock.elapsedRealtime();
while (true) {
HttpResponse httpResponse = null;
byte[] responseContents = null;
Map<String, String> responseHeaders = new HashMap<String, String>();
try {
// Gather headers.
Map<String, String> headers = new HashMap<String, String>();
// Pull cache-related headers from the cache entry and add them to the map
addCacheHeaders(headers, request.getCacheEntry());
// Perform the actual network request
httpResponse = mHttpStack.performRequest(request, headers);
// Read the status line
StatusLine statusLine = httpResponse.getStatusLine();
int statusCode = statusLine.getStatusCode();
// Copy the response headers into the responseHeaders map defined above
responseHeaders = convertHeaders(httpResponse.getAllHeaders());
// Handle cache validation.
// The content has not been modified (HTTP 304)
if (statusCode == HttpStatus.SC_NOT_MODIFIED) {
// so build a NetworkResponse from the cached data
return new NetworkResponse(HttpStatus.SC_NOT_MODIFIED,
request.getCacheEntry().data, responseHeaders, true);
}
// Convert the response body into a byte array
responseContents = entityToBytes(httpResponse.getEntity());
// if the request is slow, log it.
// Log requests that took too long
long requestLifetime = SystemClock.elapsedRealtime() - requestStart;
logSlowRequests(requestLifetime, request, responseContents, statusLine);
if (statusCode != HttpStatus.SC_OK && statusCode != HttpStatus.SC_NO_CONTENT) {
throw new IOException();
}
// Build and return the NetworkResponse,
// which carries the status code, the response body and the headers
return new NetworkResponse(statusCode, responseContents, responseHeaders, false);
} catch (SocketTimeoutException e) {
attemptRetryOnException("socket", request, new TimeoutError());
} catch (ConnectTimeoutException e) {
attemptRetryOnException("connection", request, new TimeoutError());
} catch (MalformedURLException e) {
throw new RuntimeException("Bad URL " + request.getUrl(), e);
} catch (IOException e) {
int statusCode = 0;
NetworkResponse networkResponse = null;
if (httpResponse != null) {
statusCode = httpResponse.getStatusLine().getStatusCode();
} else {
throw new NoConnectionError(e);
}
VolleyLog.e("Unexpected response code %d for %s", statusCode, request.getUrl());
if (responseContents != null) {
networkResponse = new NetworkResponse(statusCode, responseContents,
responseHeaders, false);
if (statusCode == HttpStatus.SC_UNAUTHORIZED ||
statusCode == HttpStatus.SC_FORBIDDEN) {
attemptRetryOnException("auth",
request, new AuthFailureError(networkResponse));
} else {
// TODO: Only throw ServerError for 5xx status codes.
throw new ServerError(networkResponse);
}
} else {
throw new NetworkError(networkResponse);
}
}
}
}
~~~
The overall flow is explained by the comments I added above; the key line is this one:
~~~
...
httpResponse = mHttpStack.performRequest(request, headers);
...
~~~
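One detail worth pausing on before we leave performRequest: the 304 branch above only works because addCacheHeaders attached conditional-request headers built from the cache entry. It looks roughly like this:
~~~
// Roughly, BasicNetwork.addCacheHeaders: turn the cached etag and server
// date into If-None-Match / If-Modified-Since headers, so the server can
// answer 304 and we can reuse the cached body.
private void addCacheHeaders(Map<String, String> headers, Cache.Entry entry) {
    // No cache entry, nothing to add.
    if (entry == null) {
        return;
    }
    if (entry.etag != null) {
        headers.put("If-None-Match", entry.etag);
    }
    if (entry.serverDate > 0) {
        Date refTime = new Date(entry.serverDate);
        headers.put("If-Modified-Since", DateUtils.formatDate(refTime));
    }
}
~~~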
Remember what HttpStack is? Let's recall the code in Volley.newRequestQueue:
~~~
...
if (stack == null) {
if (Build.VERSION.SDK_INT >= 9) {
stack = new HurlStack();
} else {
// Prior to Gingerbread, HttpUrlConnection was unreliable.
// See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
stack = new HttpClientStack(AndroidHttpClient.newInstance(userAgent));
}
}
...
~~~
It chooses between HttpUrlConnection and HttpClient based on the SDK version. Let's look at HttpClientStack, the HttpClient-based implementation:
~~~
@Override
public HttpResponse performRequest(Request<?> request, Map<String, String> additionalHeaders)
throws IOException, AuthFailureError {
// Build the appropriate HttpClient request object for this HTTP method
HttpUriRequest httpRequest = createHttpRequest(request, additionalHeaders);
// Add the headers obtained from the cache entry
addHeaders(httpRequest, additionalHeaders);
// Add the custom headers we supply by overriding getHeaders()
addHeaders(httpRequest, request.getHeaders());
// nothing
onPrepareRequest(httpRequest);
// Get the HttpClient connection parameters (HttpParams) so the timeouts can be set below
HttpParams httpParams = httpRequest.getParams();
int timeoutMs = request.getTimeoutMs();
// TODO: Reevaluate this connection timeout based on more wide-scale
// data collection and possibly different for wifi vs. 3G.
HttpConnectionParams.setConnectionTimeout(httpParams, 5000);
HttpConnectionParams.setSoTimeout(httpParams, timeoutMs);
// 执行网络请求并返回结果
return mClient.execute(httpRequest);
}
~~~
The first line, createHttpRequest, simply builds the matching request class (HttpGet, HttpPost, HttpPut...) for the HTTP method we chose (GET, POST, PUT...). Next, headers are added to the request twice: first the headers derived from the cache entry, then the map returned by the `getHeaders` method we can override. onPrepareRequest is an empty hook. The code then fetches the HttpClient connection parameters (note that this is HttpUriRequest.getParams(), not our overridden Request.getParams()) and sets the connection and socket timeouts on them; our own getParams() instead feeds the request body through Request.getBody(), which setEntityIfNonEmptyBody attaches for POST/PUT requests. Finally HttpClient.execute(HttpUriRequest) performs the network request and returns the result.
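For completeness, createHttpRequest looks roughly like the abbreviated sketch below (several verbs such as HEAD, OPTIONS and the deprecated GET-or-POST case are omitted); HEADER_CONTENT_TYPE and setEntityIfNonEmptyBody are members of HttpClientStack:
~~~
// Abbreviated sketch of HttpClientStack.createHttpRequest: map the Volley
// Method constant onto the matching HttpClient request class and, for
// bodied methods, attach the body built from Request.getBody().
static HttpUriRequest createHttpRequest(Request<?> request,
        Map<String, String> additionalHeaders) throws AuthFailureError {
    switch (request.getMethod()) {
        case Method.GET:
            return new HttpGet(request.getUrl());
        case Method.DELETE:
            return new HttpDelete(request.getUrl());
        case Method.POST: {
            HttpPost postRequest = new HttpPost(request.getUrl());
            postRequest.addHeader(HEADER_CONTENT_TYPE, request.getBodyContentType());
            setEntityIfNonEmptyBody(postRequest, request);
            return postRequest;
        }
        case Method.PUT: {
            HttpPut putRequest = new HttpPut(request.getUrl());
            putRequest.addHeader(HEADER_CONTENT_TYPE, request.getBodyContentType());
            setEntityIfNonEmptyBody(putRequest, request);
            return putRequest;
        }
        default:
            throw new IllegalStateException("Unknown request method.");
    }
}
~~~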
With that we have finally walked through Volley's entire request flow. But one question remains: the network queue and cache queue we have seen so far are empty; nothing ever put a request into them. So when does a request get added? Remember how we hand a request to Volley?
~~~
mRequestQueue.add(request);
~~~
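For reference, a typical request we would add looks roughly like this (the URL and the listener bodies are placeholders):
~~~
// Hypothetical usage sketch: build a JsonObjectRequest and hand it to the
// queue. In this version of Volley a null JSON body means the request is
// treated as a GET.
JsonObjectRequest request = new JsonObjectRequest(
        "http://example.com/api/user",      // placeholder URL
        null,                               // request body
        new Response.Listener<JSONObject>() {
            @Override
            public void onResponse(JSONObject response) {
                // delivered on the UI thread via ExecutorDelivery
            }
        },
        new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                // errors are delivered on the UI thread as well
            }
        });
mRequestQueue.add(request);
~~~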
Let's look at this add method:
~~~
public Request add(Request request) {
// Tag the request as belonging to this queue and add it to the set of current requests.
// Record that this request now belongs to this queue
request.setRequestQueue(this);
synchronized (mCurrentRequests) {
mCurrentRequests.add(request);
}
// Process requests in the order they are added.
request.setSequence(getSequenceNumber());
request.addMarker("add-to-queue");
// If the request is uncacheable, skip the cache queue and go straight to the network.
// If the request cannot be cached, skip the cache queue,
// put it straight onto the network queue and return
if (!request.shouldCache()) {
mNetworkQueue.add(request);
return request;
}
// Insert request into stage if there's already a request with the same cache key in flight.
synchronized (mWaitingRequests) {
String cacheKey = request.getCacheKey();
// If a request with the same cacheKey is already in flight,
// park this one in the waiting queue for that cacheKey.
// The cacheKey is essentially the URL being requested,
// so requests for the same URL are grouped together
if (mWaitingRequests.containsKey(cacheKey)) {
// There is already a request in flight. Queue up.
Queue<Request> stagedRequests = mWaitingRequests.get(cacheKey);
if (stagedRequests == null) {
stagedRequests = new LinkedList<Request>();
}
stagedRequests.add(request);
mWaitingRequests.put(cacheKey, stagedRequests);
if (VolleyLog.DEBUG) {
VolleyLog.v("Request for cacheKey=%s is in flight, putting on hold.", cacheKey);
}
} else {
// Insert 'null' queue for this cacheKey, indicating there is now a request in
// flight.
// Otherwise insert a null marker for this cacheKey
// and put the request onto the cache queue
mWaitingRequests.put(cacheKey, null);
mCacheQueue.add(request);
}
return request;
}
}
~~~
Why go to all this trouble with several different queues? It gets a bit dizzying. To make sense of it, look at RequestQueue's finish method, which is called when a request completes; in the code above, Request.finish(tag) is invoked in many places:
~~~
void finish(final String tag) {
if (mRequestQueue != null) {
mRequestQueue.finish(this);
}
if (MarkerLog.ENABLED) {
final long threadId = Thread.currentThread().getId();
if (Looper.myLooper() != Looper.getMainLooper()) {
// If we finish marking off of the main thread, we need to
// actually do it on the main thread to ensure correct ordering.
Handler mainThread = new Handler(Looper.getMainLooper());
mainThread.post(new Runnable() {
@Override
public void run() {
mEventLog.add(tag, threadId);
mEventLog.finish(this.toString());
}
});
return;
}
mEventLog.add(tag, threadId);
mEventLog.finish(this.toString());
} else {
long requestTime = SystemClock.elapsedRealtime() - mRequestBirthTime;
if (requestTime >= SLOW_REQUEST_THRESHOLD_MS) {
VolleyLog.d("%d ms: %s", requestTime, this.toString());
}
}
}
~~~
It calls RequestQueue.finish, passing the current request object along:
~~~
void finish(Request request) {
// Remove from the set of requests currently being processed.
// Remove it from the set of requests currently being processed
synchronized (mCurrentRequests) {
mCurrentRequests.remove(request);
}
if (request.shouldCache()) {
synchronized (mWaitingRequests) {
String cacheKey = request.getCacheKey();
Queue<Request> waitingRequests = mWaitingRequests.remove(cacheKey);
if (waitingRequests != null) {
if (VolleyLog.DEBUG) {
VolleyLog.v("Releasing %d waiting requests for cacheKey=%s.",
waitingRequests.size(), cacheKey);
}
// Process all queued up requests. They won't be considered as in flight, but
// that's not a problem as the cache has been primed by 'request'.
mCacheQueue.addAll(waitingRequests);
}
}
}
}
~~~
This removes the request from mCurrentRequests and then checks whether mWaitingRequests holds any requests parked under the same cache key (i.e. the same URL). If so, that whole waiting queue is removed and dumped onto mCacheQueue. Why? Remember the chain of checks in the CacheDispatcher: if a response for that URL has just been cached, those waiting requests will most likely be answered straight from the cache. This solves an important problem: fire two requests at the same URL back to back and only one of them actually hits the network; the second gets its result from the cache.
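To make the de-duplication concrete, here is a hypothetical sketch (URL and listeners are placeholders): both requests share the same cache key, so normally only the first one reaches the network, while the second waits in mWaitingRequests and is re-queued onto the cache queue once the first finishes.
~~~
// Hypothetical sketch: two back-to-back requests for the same URL.
// add() parks the second one in mWaitingRequests; when the first request
// finishes (and has primed the cache), finish() moves the waiting one onto
// the cache queue, where it is typically answered from the fresh cache.
void requestTwice(RequestQueue queue, String url,
        Response.Listener<JSONObject> listener,
        Response.ErrorListener errorListener) {
    queue.add(new JsonObjectRequest(url, null, listener, errorListener));
    queue.add(new JsonObjectRequest(url, null, listener, errorListener));
}
~~~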
OK, and with that our walkthrough of Volley's request flow is complete.