When I first extracted the Wikipedia corpus, I used wikiextractor, but I found that its output kept coming out wrong, so it was of no use to me. Since many people have asked me how to extract the corpus, I am publishing the code here.
I did not write this code myself; I found it on a website. That was a long time ago and I have forgotten the site's address, so I cannot post the original URL. If the author sees this, please send me a private message and I will link back to the original.
The author’s email address is: [email protected]
How to use it: run the following at the command line:
python data_pre_process.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.text
The first argument is the Wikipedia dump archive; the second is the plain-text file the extracted corpus is saved to.
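If you do not have the dump yet, it can be fetched from the official Wikimedia dumps site (the URL below follows the standard dumps layout):
wget https://dumps.wikimedia.org/zhwiki/latest/zhwiki-latest-pages-articles.xml.bz2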
Source code:
# -*- coding: utf-8 -*-
# Author: Pan Yang ([email protected])
# Copyright 2017
import logging
import os.path
import sys

from gensim.corpora import WikiCorpus

# Convert the Wikipedia XML dump into plain-text format
# python data_pre_process.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.text
if __name__ == '__main__':
    program = os.path.basename(sys.argv[0])
    logger = logging.getLogger(program)

    logging.basicConfig(format='%(asctime)s: %(levelname)s: %(message)s')
    logging.root.setLevel(level=logging.INFO)
    logger.info("running %s", ' '.join(sys.argv))

    # Check and process input arguments
    if len(sys.argv) != 3:
        print("Usage: python data_pre_process.py zhwiki-latest-pages-articles.xml.bz2 wiki.zh.text")
        sys.exit(1)
    inp, outp = sys.argv[1:3]

    space = " "
    i = 0

    output = open(outp, 'w', encoding='utf-8')
    # dictionary={} skips building a vocabulary, which we do not need here
    wiki = WikiCorpus(inp, dictionary={})
    for text in wiki.get_texts():
        # get_texts() yields one article at a time as a list of tokens
        output.write(space.join(text) + '\n')
        i = i + 1
        if i % 10000 == 0:
            logger.info("Saved %d articles", i)
    output.close()
    logger.info("Finished; saved %d articles in total", i)