
When we crawl content from a URL, the process can sometimes hang like a zombie, and the task gets stalled by network connection problems.

How can we avoid this situation?

We can set time limits, namely the read timeout and the connect timeout. If a timeout expires before the task finishes, the program throws an exception that stops the connection.

An example is shown below:

try
{
    URL url = new URL("http://xxx.com");
    URLConnection connection = url.openConnection();
    HttpURLConnection httpConn = (HttpURLConnection) connection;
    // Both timeouts must be set before getInputStream() opens the stream.
    httpConn.setReadTimeout(10000);    // 10 seconds to wait for data
    httpConn.setConnectTimeout(10000); // 10 seconds to establish the connection
    BufferedReader in = new BufferedReader(
            new InputStreamReader(httpConn.getInputStream(), "UTF-8"));
    in.close();
}
catch (MalformedURLException e) // must be caught before IOException, its superclass
{
    System.out.println(e);
}
catch (IOException e)
{
    System.out.println(e);
}

Attention:

1. The unit of the timeout is milliseconds, so 10000 means 10 seconds.
2. We read from httpConn.getInputStream() (of the HttpURLConnection class) instead of url.openStream() (of the URL class), because we need the connection object in order to set the timeouts.
3. Both timeouts must be set before we open the input stream.
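To see the timeouts in action without depending on a real website, the sketch below (class and method names are hypothetical) starts a local server socket that accepts the TCP connection but never sends an HTTP response, simulating a stalled crawl target. With a 1-second read timeout, the request fails quickly with a SocketTimeoutException instead of hanging forever:

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.ServerSocket;
import java.net.SocketTimeoutException;
import java.net.URL;

public class TimeoutDemo {
    // Returns true if the timeout fired instead of the read hanging forever.
    public static boolean timesOut() throws IOException {
        // A local server socket: the OS backlog completes the TCP handshake,
        // but no HTTP response is ever written -- a stand-in for a zombie server.
        try (ServerSocket server = new ServerSocket(0)) {
            URL url = new URL("http://localhost:" + server.getLocalPort() + "/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(1000); // 1 second to establish the connection
            conn.setReadTimeout(1000);    // 1 second to wait for the response
            try {
                conn.getInputStream().read();
                return false; // unexpectedly received data
            } catch (SocketTimeoutException e) {
                return true;  // timed out after about 1 second, as intended
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(timesOut() ? "timed out as expected" : "got data");
    }
}
```

Without the setReadTimeout call, the same read would block indefinitely, which is exactly the zombie behavior described above.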

Posted by JerryCheng at KwCheng's blog on Pixnet.