Open-source project name: NikolaiT/GoogleScraper
Repository URL: https://github.com/NikolaiT/GoogleScraper
Primary language: HTML 89.8%

# GoogleScraper - Scraping search engines professionally

The maintained successor of GoogleScraper is the general purpose crawling infrastructure.

## Scrapeulous.com - Scraping Service

GoogleScraper is an open source tool and will remain an open source tool in the future. The modern successor of GoogleScraper, the general purpose crawling infrastructure, will also remain open source and free. Some people, however, want a service that lets them quickly scrape some data from Google or any other search engine. For this reason, I created the web service scrapeulous.com.

## Switching from Python to Javascript/puppeteer

Last state: February 2019

The successor of GoogleScraper can be found here. This means that I won't maintain this project anymore. All new development goes into the above project. There are several reasons why I won't continue to put much effort into this project.
For this reason I am going to continue developing a scraping library named se-scraper (https://www.npmjs.com/package/se-scraper) in Javascript, which runs on top of puppeteer. You can download it here: https://www.npmjs.com/package/se-scraper. It supports a wide range of different search engines and is much more efficient than GoogleScraper. The code base is also much less complex, without threading/queueing and complex logging capabilities.

## August/September 2018

For questions you can contact me on my webpage and write me an email there. This project is back to life after two years of abandonment. In the coming weeks, I will take some time to update all functionality to the most recent developments. This encompasses updating all regexes and adapting to changes in search engine behavior. After a couple of weeks, you can expect this project to work again as documented here.

## Table of Contents

## Installation

GoogleScraper is written in Python 3. You should install at least Python 3.6; the last major development was done with Python 3.7. So when using Ubuntu 16.04 and Python 3.7, for instance, please install Python 3 from the official packages. I use the Anaconda Python distribution, which works very well for me. Furthermore, you need to install the Chrome browser and the ChromeDriver for Selenium mode, or alternatively the Firefox browser and geckodriver. See instructions below. You can also install GoogleScraper comfortably with pip:
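The install command itself was lost from this copy; since the package is published on PyPI under the name GoogleScraper, the usual pip invocation applies:

```shell
pip install GoogleScraper
```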
Right now (September 2018) this is discouraged; please install from the latest GitHub sources instead.

## Alternatively, install directly from GitHub

Sometimes the newest and most awesome stuff is not available in the cheeseshop (that's how they call https://pypi.python.org/pypi/pip). Therefore you may want to install GoogleScraper from the latest source that resides in this GitHub repository. You can do so like this:
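The commands were stripped from this copy; a common way to install from the repository (assuming git and pip are available) is:

```shell
git clone https://github.com/NikolaiT/GoogleScraper
cd GoogleScraper
pip install .
```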
Please note that some features and examples might not work as expected. I also don't guarantee that the app even runs; I only guarantee (to a certain degree at least) that installing from pip will yield a usable version.

### Chromedriver

Download the latest chromedriver from here: https://sites.google.com/a/chromium.org/chromedriver/downloads — unzip the driver, save it somewhere, and then update the path in the GoogleScraper configuration file.

### Geckodriver

Download the latest geckodriver from here: https://github.com/mozilla/geckodriver/releases — unzip the driver, save it somewhere, and then update the path in the GoogleScraper configuration file.

### Update the settings for selenium and firefox/chrome

Update the following settings in the GoogleScraper configuration file:
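The configuration snippet itself did not survive extraction. As an illustration only — the actual section and key names depend on your GoogleScraper version, so treat every name below as an assumption and check the config file the project ships — such a fragment might look like:

```ini
; hypothetical fragment -- verify the real key names in your GoogleScraper config
[SELENIUM]
; absolute path to the chromedriver or geckodriver binary you unzipped
executable_path = /usr/local/bin/chromedriver
; which browser selenium mode should drive: chrome or firefox
sel_browser = chrome
```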
## Quick Start

Install as described above. Make sure that you have the selenium drivers for chrome/firefox if you want to use GoogleScraper in selenium mode.

See all options:
Scrape the single keyword "apple" with http mode:
Scrape all keywords that are in the file
Scrape all keywords that are in
Do an image search for the keyword "K2 mountain" on google:
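The actual commands for the steps above were lost from this copy. Reconstructed from memory of the project's README — the flag names may differ between versions, so verify them with `GoogleScraper -h` — the quick-start invocations look roughly like this:

```shell
# See all options
GoogleScraper -h

# Scrape the single keyword "apple" with http mode
GoogleScraper -m http --keyword "apple" -v info

# Scrape all keywords in a keyword file with selenium mode
GoogleScraper -m selenium --keyword-file path/to/keywords.txt -v info

# Image search for "K2 mountain" on google
GoogleScraper -m selenium --keyword "K2 mountain" --search-type image -v info
```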
## Asynchronous mode

This is probably the most awesome feature of GoogleScraper. You can scrape with thousands of requests per second if you have enough IP addresses or proxies to spread the requests over; from a single IP address you will only ever get partial results.
Example for asynchronous mode: search the keywords in the keyword file SearchData/marketing-models-brands.txt on bing and yahoo. By default, asynchronous mode spawns 100 requests at the same time, which means around 100 requests per second (depending on the actual connection...).
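The fan-out pattern behind asynchronous mode can be sketched with the standard library alone. This is an illustration, not GoogleScraper's actual code: `fetch` stands in for a real HTTP request, and the semaphore caps the number of requests in flight at 100, mirroring the default described above.

```python
import asyncio

CONCURRENCY = 100  # asynchronous mode's default batch size

async def fetch(keyword: str, sem: asyncio.Semaphore) -> str:
    """Stand-in for one search-engine request for `keyword`."""
    async with sem:                # at most CONCURRENCY fetches in flight
        await asyncio.sleep(0)     # placeholder for the network round-trip
        return f"results for {keyword}"

async def scrape(keywords):
    """Spawn one task per keyword and gather all results."""
    sem = asyncio.Semaphore(CONCURRENCY)
    return await asyncio.gather(*(fetch(k, sem) for k in keywords))

out = asyncio.run(scrape([f"kw{i}" for i in range(250)]))
print(len(out))  # 250
```

With real network I/O the semaphore is what keeps the machine from opening thousands of sockets at once while still saturating the connection.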
The results (partial results, because there were too many keywords for one IP address) can be inspected in the file Outputs/marketing.json.

## Testing GoogleScraper

GoogleScraper is hugely complex. Because it supports many search engines, and the HTML and Javascript of those search providers changes frequently, it is often the case that GoogleScraper ceases to function for some search engine. To spot this, you can run functional tests. For example, the test below runs a scraping session for Google and Bing and checks that the gathered data looks more or less okay.
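The test itself was stripped from this copy. As a sketch of what such a functional test asserts (the real test names and helpers in GoogleScraper differ): after a scraping session, every result entry should at least carry a plausible link and a non-empty title.

```python
def looks_okay(serp_entries) -> bool:
    """Basic sanity checks on scraped result entries (illustrative only)."""
    if not serp_entries:
        return False
    for entry in serp_entries:
        if not entry.get("link", "").startswith("http"):
            return False
        if not entry.get("title"):
            return False
    return True

# Dummy data standing in for a real Google/Bing scraping session.
sample = [
    {"link": "https://example.com", "title": "Example", "snippet": "..."},
    {"link": "https://example.org", "title": "Example 2", "snippet": "..."},
]
print(looks_okay(sample))                        # True
print(looks_okay([{"link": "", "title": ""}]))   # False
```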
## What does GoogleScraper.py do?

GoogleScraper parses Google search engine results (and those of many other search engines) easily and quickly. It allows you to programmatically extract all found links with their titles and descriptions, which enables you to process the scraped data further. There are unlimited usage scenarios:
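A minimal, standard-library-only sketch of the kind of extraction this amounts to — pulling links and their titles out of result HTML. The markup below is invented for illustration; real SERP markup is far messier and changes frequently, which is exactly why GoogleScraper's parsers need constant maintenance.

```python
from html.parser import HTMLParser

class ResultParser(HTMLParser):
    """Collect (link, title) pairs from anchor tags in a chunk of HTML."""
    def __init__(self):
        super().__init__()
        self.results = []
        self._in_link = False

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.startswith("http"):
                self.results.append({"link": href, "title": ""})
                self._in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_link = False

    def handle_data(self, data):
        if self._in_link and self.results:
            self.results[-1]["title"] += data

html = '<div><a href="https://example.com">Example result</a></div>'
p = ResultParser()
p.feed(html)
print(p.results)  # [{'link': 'https://example.com', 'title': 'Example result'}]
```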
First of all you need to understand that GoogleScraper uses two completely different scraping approaches:
Whereas the former approach was implemented first, the latter looks much more promising in comparison, because search engines have no easy way of detecting it. GoogleScraper is implemented with the following techniques/software:
## What search engines are supported?

Currently the following search engines are supported:
## How does GoogleScraper maximize the amount of extracted information per IP address?

Scraping is a critical and highly complex subject. Google and other search engine giants have a strong incentive to make the scraper's life as hard as possible. There are several ways for search engine providers to detect that a robot is using their search engine:
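Two common countermeasures against this kind of detection are rotating user-agent strings and randomizing the delay between requests. The sketch below illustrates the idea only; the user-agent strings and delay values are not the ones GoogleScraper ships.

```python
import random

# Illustrative browser identities (a real list would be much longer and current).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:60.0) Gecko/20100101 Firefox/60.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13) AppleWebKit/537.36",
]

def request_headers() -> dict:
    """Pick a plausible browser identity for the next request."""
    return {"User-Agent": random.choice(USER_AGENTS)}

def next_delay(base: float = 2.0, jitter: float = 3.0) -> float:
    """Seconds to wait before the next request, randomized so the timing
    lacks the perfectly regular rhythm that gives robots away."""
    return base + random.random() * jitter

print(request_headers()["User-Agent"] in USER_AGENTS)  # True
print(2.0 <= next_delay() < 5.0)                       # True
```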
So the biggest hurdle to tackle is the javascript detection algorithms. I don't know what Google does in their javascript, but I will soon investigate it further and then decide whether it's better to change strategies and switch to an approach that scrapes by simulating browsers in a browser-like environment that can execute javascript. The networking of each of these virtual browsers is proxified and manipulated such that it behaves like a real physical user agent. I am pretty sure that it must be possible to handle 20 such browser sessions in parallel without stressing resources too much. The real problem is, as always, the lack of good proxies...

## How to overcome difficulties of low level (http) scraping?

As mentioned above, there are several drawbacks when scraping with raw http requests. Browsers are ENORMOUSLY complex software systems. Chrome has around 8 million lines of code and Firefox even 10 million. Huge companies invest a lot of money to push technology forward (HTML5, CSS3, new standards), and each browser has a unique behaviour. Therefore it's almost impossible to simulate such a browser manually with HTTP requests. This means Google has numerous ways to detect anomalies and inconsistencies in the browsing usage. The dynamic nature of Javascript alone makes it impossible to scrape undetected.

This cries for an alternative approach that automates a real browser with Python. It would be best to control the Chrome browser, since Google has the least incentive to restrict capabilities for their own native browser. Hence I need a way to automate Chrome with Python and control several independent instances with different proxies set. Then the output of results grows linearly with the number of proxies used... Some interesting technologies/software to do so:

## More detailed Explanation

Probably the best way to use GoogleScraper is to use it from the command line and fire a command such as the following:
Here sel marks the scraping mode as 'selenium'. This means GoogleScraper.py scrapes with real browsers. This is pretty powerful, since you can scrape for a long time across a lot of sites (Google has a hard time blocking real browsers). The argument of the flag … Furthermore, the option …

Example keyword-file:
After the scraping you'll automatically have a new sqlite3 database in the named …
It shouldn't be a problem to scrape 10,000 keywords in 2 hours. If you are really crazy, set the maximal browsers in the config a little bit higher (at the top of the script file). If you want, you can specify the flag …
Example:
In case you want to use GoogleScraper.py in http mode (which means that raw http headers are sent), use it as follows:
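Under the hood, http mode amounts to issuing a raw GET request that carries browser-like headers. The sketch below (standard library only, no network I/O — it only builds and inspects the request object) illustrates the shape of such a request; the header values are illustrative, not GoogleScraper's actual set.

```python
import urllib.parse
import urllib.request

def build_search_request(query: str) -> urllib.request.Request:
    """Build a Google search GET request with hand-set, browser-like headers."""
    url = "https://www.google.com/search?" + urllib.parse.urlencode({"q": query})
    return urllib.request.Request(url, headers={
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0",
        "Accept": "text/html,application/xhtml+xml",
        "Accept-Language": "en-US,en;q=0.5",
    })

req = build_search_request("apple iphone")
print(req.full_url)  # https://www.google.com/search?q=apple+iphone
```

The fragility discussed above comes from exactly this: every header, and its absence, is a signal a search engine can compare against what a real browser would send.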
## Contact

If you feel like contacting me, do so and send me a mail. You can find my contact information on my blog.