Project: yasserg/crawler4j
Repository: https://github.com/yasserg/crawler4j
Primary language: Java (82.9%)

crawler4j

crawler4j is an open source web crawler for Java which provides a simple interface for crawling the Web. Using it, you can set up a multi-threaded web crawler in a few minutes.

Table of contents

- Installation
- Quickstart
- More Examples
- Configuration Details
- License

Installation

Using Maven

Add the following dependency to your pom.xml:

<dependency>
<groupId>edu.uci.ics</groupId>
<artifactId>crawler4j</artifactId>
<version>4.4.0</version>
</dependency>

Using Gradle

Add the following dependency to your build.gradle file:
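The Gradle snippet itself did not survive here; a declaration built from the same coordinates as the Maven dependency above should look roughly like this:

compile group: 'edu.uci.ics', name: 'crawler4j', version: '4.4.0'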
Quickstart

You need to create a crawler class that extends WebCrawler. This class decides which URLs should be crawled and handles the downloaded page. The following is a sample implementation:

import java.util.Set;
import java.util.regex.Pattern;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {
private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|gif|jpg"
+ "|png|mp3|mp4|zip|gz))$");
/**
* This method receives two parameters. The first parameter is the page
* in which we have discovered this new url and the second parameter is
* the new url. You should implement this function to specify whether
* the given url should be crawled or not (based on your crawling logic).
* In this example, we are instructing the crawler to ignore urls that
* have css, js, gif, ... extensions and to only accept urls that start
* with "https://www.ics.uci.edu/". In this case, we didn't need the
* referringPage parameter to make the decision.
*/
@Override
public boolean shouldVisit(Page referringPage, WebURL url) {
String href = url.getURL().toLowerCase();
return !FILTERS.matcher(href).matches()
&& href.startsWith("https://www.ics.uci.edu/");
}
/**
* This function is called when a page is fetched and ready
* to be processed by your program.
*/
@Override
public void visit(Page page) {
String url = page.getWebURL().getURL();
System.out.println("URL: " + url);
if (page.getParseData() instanceof HtmlParseData) {
HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
String text = htmlParseData.getText();
String html = htmlParseData.getHtml();
Set<WebURL> links = htmlParseData.getOutgoingUrls();
System.out.println("Text length: " + text.length());
System.out.println("Html length: " + html.length());
System.out.println("Number of outgoing links: " + links.size());
}
}
}

As can be seen in the above code, there are two main functions that should be overridden:

- shouldVisit: decides, for each URL discovered on a referring page, whether that URL should be crawled, based on your own crawling logic.
- visit: is called once a page has been fetched and parsed, and gives you access to its URL, text, HTML, and outgoing links.
You should also implement a controller class which specifies the seeds of the crawl, the folder in which intermediate crawl data should be stored, and the number of concurrent threads:

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class Controller {
public static void main(String[] args) throws Exception {
String crawlStorageFolder = "/data/crawl/root";
int numberOfCrawlers = 7;
CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder(crawlStorageFolder);
// Instantiate the controller for this crawl.
PageFetcher pageFetcher = new PageFetcher(config);
RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);
// For each crawl, you need to add some seed urls. These are the first
// URLs that are fetched and then the crawler starts following links
// which are found in these pages
controller.addSeed("https://www.ics.uci.edu/~lopes/");
controller.addSeed("https://www.ics.uci.edu/~welling/");
controller.addSeed("https://www.ics.uci.edu/");
// The factory which creates instances of crawlers.
CrawlController.WebCrawlerFactory<MyCrawler> factory = MyCrawler::new;
// Start the crawl. This is a blocking operation, meaning that your code
// will reach the line after this only when crawling is finished.
controller.start(factory, numberOfCrawlers);
}
}

More Examples

Further example crawlers can be found in the project repository.
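The factory used in the controller above is a single-method interface, so it can also be written as a lambda. That is handy when a crawler needs constructor arguments, for example a shared collection for gathering results across threads. The sketch below is only an illustration: it assumes a hypothetical MyCrawler(Set<String>) constructor that the class shown earlier does not have, and it drops into Controller.main in place of the last two statements (it also needs java.util.Collections, java.util.HashSet, and java.util.Set imported).

// Variation on Controller.main above: the factory lambda passes a shared,
// thread-safe set into every crawler instance it creates.
// Assumes a hypothetical constructor MyCrawler(Set<String> collectedUrls).
Set<String> collectedUrls = Collections.synchronizedSet(new HashSet<>());
CrawlController.WebCrawlerFactory<MyCrawler> factory = () -> new MyCrawler(collectedUrls);
controller.start(factory, numberOfCrawlers);
System.out.println("Collected " + collectedUrls.size() + " URLs");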
Configuration Details

The controller class has a mandatory parameter of type CrawlConfig. Instances of this class can be used for configuring crawler4j. The following sections describe some details of the configuration.

Crawl depth

By default there is no limit on the depth of crawling, but you can impose one. For example, assume that you have a seed page "A", which links to "B", which links to "C", which links to "D". So we have the following link structure:

A -> B -> C -> D

Since "A" is a seed page, it has a depth of 0, "B" has a depth of 1, and so on. You can set a limit on the depth of pages that crawler4j crawls. For example, if you set this limit to 2, it won't crawl page "D". To set the maximum depth you can use:

crawlConfig.setMaxDepthOfCrawling(maxDepthOfCrawling);

Enable SSL

To enable SSL simply:

CrawlConfig config = new CrawlConfig();
config.setIncludeHttpsPages(true);

Maximum number of pages to crawl

Although by default there is no limit on the number of pages to crawl, you can set one:

crawlConfig.setMaxPagesToFetch(maxPagesToFetch);

Enable Binary Content Crawling

By default, crawling binary content (i.e. images, audio, etc.) is turned off. To enable crawling these files:

crawlConfig.setIncludeBinaryContentInCrawling(true);

See the examples in the repository for more details.

Politeness

crawler4j is designed to be very efficient and can crawl domains very fast (for example, it has been able to crawl 200 Wikipedia pages per second). However, since this is against crawling policies and puts a huge load on servers (which might block you!), crawler4j has, since version 1.3, waited at least 200 milliseconds between requests by default. This parameter can be tuned:

crawlConfig.setPolitenessDelay(politenessDelay);
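In practice, several of these options are combined on a single CrawlConfig before the controller is created. A minimal sketch using only the setters documented above, with illustrative rather than recommended values:

CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder("/data/crawl/root");   // where intermediate crawl data is kept
config.setMaxDepthOfCrawling(2);                    // seeds have depth 0, so page "D" above is skipped
config.setMaxPagesToFetch(1000);                    // illustrative cap on the number of fetched pages
config.setIncludeBinaryContentInCrawling(false);    // default: skip images, audio, etc.
config.setPolitenessDelay(200);                     // wait at least 200 ms between requests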
Proxy

Should your crawl run behind a proxy? If so, you can use:

crawlConfig.setProxyHost("proxyserver.example.com");
crawlConfig.setProxyPort(8080);

If your proxy also needs authentication:

crawlConfig.setProxyUsername(username);
crawlConfig.setProxyPassword(password);

Resumable Crawling

Sometimes you need to run a crawler for a long time, and it may terminate unexpectedly. In such cases it can be desirable to resume the crawl. You can resume a previously stopped or crashed crawl with the following setting:

crawlConfig.setResumableCrawling(true);

Note, however, that this may make the crawling slightly slower.
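Resuming relies on the intermediate crawl data that crawler4j writes to the crawl storage folder, so a restarted run would presumably point at the same folder as the interrupted one. A minimal sketch under that assumption:

// Sketch: configuration for a crawl intended to be resumable after a crash.
// Assumes the same storage folder is reused across runs, since that is where
// crawler4j keeps its intermediate crawl data.
CrawlConfig config = new CrawlConfig();
config.setCrawlStorageFolder("/data/crawl/root"); // same folder as the interrupted run
config.setResumableCrawling(true);                // keep the data needed to resume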
User agent string

The user agent string identifies your crawler to web servers. By default crawler4j uses its own user agent string, but you can override it:

crawlConfig.setUserAgentString(userAgentString);

License

Copyright (c) 2010-2018 Yasser Ganjisaffar

Published under Apache License 2.0, see LICENSE