A few points about your question regarding crawling and Wikipedia.
You have linked to the Wikipedia data dumps, and you can use the Cloud9 project from UMD to work with that data in Hadoop.
They have a page on this: Working with Wikipedia.
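If it helps to see how that fits together, below is a minimal sketch of a map-only Hadoop job that reads the dump through Cloud9 and emits article titles. The class and method names (WikipediaPageInputFormat, WikipediaPage.isArticle(), getTitle()) are assumptions taken from the Cloud9 documentation, so check them against the version you actually download.

    // Minimal sketch of a Hadoop (old mapred API) job reading a Wikipedia dump via Cloud9.
    // Cloud9 class/method names below are assumptions; verify against your Cloud9 version.
    import java.io.IOException;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    import edu.umd.cloud9.collection.wikipedia.WikipediaPage;
    import edu.umd.cloud9.collection.wikipedia.WikipediaPageInputFormat;

    public class CountWikipediaArticles {

      // Emits (title, 1) for every page that is a real article
      // (skipping redirects, talk pages, etc.).
      public static class ArticleMapper extends MapReduceBase
          implements Mapper<LongWritable, WikipediaPage, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text title = new Text();

        public void map(LongWritable key, WikipediaPage page,
                        OutputCollector<Text, IntWritable> output, Reporter reporter)
            throws IOException {
          if (page.isArticle()) {          // assumed Cloud9 helper
            title.set(page.getTitle());    // assumed Cloud9 helper
            output.collect(title, ONE);
          }
        }
      }

      public static void main(String[] args) throws IOException {
        JobConf conf = new JobConf(CountWikipediaArticles.class);
        conf.setJobName("count-wikipedia-articles");

        // args[0]: path to the pages-articles XML dump on HDFS
        // args[1]: output directory
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        conf.setInputFormat(WikipediaPageInputFormat.class);  // assumed Cloud9 class
        conf.setMapperClass(ArticleMapper.class);
        conf.setNumReduceTasks(0);  // map-only, just to show the input format

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        JobClient.runJob(conf);
      }
    }

The point is only that Cloud9's input format hands your mapper parsed page objects instead of raw XML; whatever you actually compute from each page goes where the collect call is.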
Another data source to add to the list is:
- ClueWeb09 - about 1 billion web pages collected in January and February 2009, roughly 5 TB compressed.
I would say that using a crawler to generate data should be asked as a separate question from one about Hadoop/MapReduce.