Word Alignment - MGIZA++
创始人
2024-05-28 12:46:05

Table of Contents

    • About MGIZA++
      • giza-py
    • Installing MGIZA++
    • Command Reference
      • mkcls
      • d4norm
      • hmmnorm
      • plain2snt
      • snt2cooc
      • snt2coocrmp
      • snt2plain
      • symal
      • mgiza
        • general parameters:
        • No. of iterations:
        • parameter for various heuristics in GIZA++ for efficient training:
        • parameters for describing the type and amount of output:
        • parameters describing input files:
        • smoothing parameters:
        • parameters modifying the models:
        • parameters modifying the EM-algorithm:


About MGIZA++

A word alignment tool based on the famous GIZA++, extended to support multi-threading, resume training, and incremental training.

  • Github: https://github.com/moses-smt/mgiza

MGIZA++ is a multi-threaded word alignment tool that extends GIZA++. When running MGIZA++, you can specify how many processors to use according to your machine.

PGIZA++ is a version of GIZA++ that runs on distributed machines, built on a MapReduce framework.


giza-py

https://github.com/sillsdev/giza-py
giza-py is a simple, Python-based, command-line runner for MGIZA++, a popular tool for building word alignment models.


Reference: The issue of parallelism in Moses model training
https://www.52nlp.cn/the-issue-of-parallel-in-moses-model-training


Installing MGIZA++

1. Download the repo: https://github.com/moses-smt/mgiza

2. In a terminal, enter the mgizapp directory and run the following commands:

cmake . 
make
make install

The following executables are produced in the bin directory:

  • hmmnorm
  • mkcls
  • snt2cooc
  • snt2plain
  • d4norm
  • mgiza
  • plain2snt
  • snt2coocrmp
  • symal

命令说明

mkcls

mkcls - a program for making word classes.

Usage:

mkcls [-nnum] [-ptrain] [-Vfile] opt

  • -V: output classes (Default: no file)
  • -n: number of optimization runs (Default: 1); larger number => better results
  • -p: filename of training corpus (Default: ‘train’)
  • -c: number of classes to generate (used as -c80 in the example below)

Example:

mkcls -c80 -n10 -pin -Vout opt

(generates 80 classes for the corpus ‘in’ and writes the classes in ‘out’)
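
In the usual MGIZA++ training pipeline, mkcls is run once per language side before alignment. A minimal sketch, assuming tokenized files corpus.src and corpus.tgt (one sentence per line); the .vcb.classes output names are just a common convention, not a requirement:

# generate 50 word classes per side, with 2 optimization runs each
mkcls -c50 -n2 -pcorpus.src -Vcorpus.src.vcb.classes opt
mkcls -c50 -n2 -pcorpus.tgt -Vcorpus.tgt.vcb.classes opt

The resulting class files can be passed to mgiza via the sourcevocabularyclasses and targetvocabularyclasses parameters listed later in this article.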

Literature:
Franz Josef Och: "Maximum-Likelihood-Schätzung von Wortkategorien mit Verfahren der kombinatorischen Optimierung". Studienarbeit, Universität Erlangen-Nürnberg, Germany, 1995.


d4norm

d4norm vcb1 vcb2 outputFile baseFile [additional1 ]…


hmmnorm

hmmnorm vcb1 vcb2 outputFile baseFile [additional1 ]…
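
Both d4norm and hmmnorm follow the same calling convention: given the two vocabularies plus one or more dumped count tables, they write a normalized (probability) table, which is how counts from chunked or parallel training runs are merged. A sketch, assuming count tables part1.hhmm.counts and part2.hhmm.counts were dumped by two separate mgiza runs with dumpcount enabled (the file names are assumptions):

hmmnorm corpus.src.vcb corpus.tgt.vcb hmm.normalized part1.hhmm.counts part2.hhmm.counts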


plain2snt

Converts plain text into GIZA++ snt-format.

plain2snt txt1 txt2 [txt3 txt4 -weight w -vcb1 output1.vcb -vcb2 output2.vcb -snt1 output1_output2.snt -snt2 output2_output1.snt]
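
A minimal sketch, assuming tokenized parallel files corpus.src and corpus.tgt with one sentence per line; without the optional arguments, plain2snt typically derives the output names from the inputs (corpus.src.vcb, corpus.tgt.vcb, corpus.src_corpus.tgt.snt, corpus.tgt_corpus.src.snt):

plain2snt corpus.src corpus.tgt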


snt2cooc

Computes a word co-occurrence file from GIZA++ snt-format input.

snt2cooc output vcb1 vcb2 snt12
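
Continuing the plain2snt sketch above, this builds the co-occurrence file that can later be fed to mgiza training (the output file name is an arbitrary choice):

snt2cooc corpus.src_corpus.tgt.cooc corpus.src.vcb corpus.tgt.vcb corpus.src_corpus.tgt.snt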


snt2coocrmp

Computes a word co-occurrence file from GIZA++ snt-format input; a reduced-memory variant of snt2cooc with the same calling convention.

snt2coocrmp output vcb1 vcb2 snt12


snt2plain

Converts GIZA++ snt-format into plain text.

snt2plain vcb1 vcb2 snt12 output_prefix [ -counts ]


symal

symal [-i=] [-o=] -a=[u|i|g] -d=[yes|no] -b=[yes|no] -f=[yes|no]

-a selects the symmetrization method (u: union, i: intersection, g: grow); -d, -b and -f toggle the diagonal, both, and final heuristics. The input file (or stdin) must be in .bal format (see script giza2bal.pl).
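
A sketch of the usual symmetrization step; the giza2bal.pl flags and the directional A3 alignment file names are assumptions (giza2bal.pl ships with the Moses scripts):

giza2bal.pl -d src-tgt.A3.final -i tgt-src.A3.final > aligned.bal
symal -a=g -d=yes -f=yes -b=yes -i=aligned.bal -o=aligned.grow-diag-final-and

This flag combination corresponds to the common grow-diag-final-and symmetrization heuristic.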


mgiza

Usage:

mgiza [options]


Options (these override parameters set in the config file):

  • --v: print verbose messages (warning: the output is not very descriptive and not systematic).
  • --NODUMPS: do not write any files to disk (this overrides the dump frequency options).
  • --h[elp]: print this help
  • --p: use pegging when generating alignments for Model 3 training (Default: no pegging)
  • --st: use a fixed distribution for the fertility parameters when transferring from Model 2 to Model 3 (Default: complicated estimation)
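
Parameters from the dump below can also be passed on the command line with a leading dash. Putting the earlier steps together, a minimal sketch of a training run; the coocurrencefile parameter name is an assumption (it does not appear in the dump below), and all file names follow the plain2snt/snt2cooc/mkcls sketches above:

mgiza -s corpus.src.vcb -t corpus.tgt.vcb \
      -c corpus.src_corpus.tgt.snt \
      -coocurrencefile corpus.src_corpus.tgt.cooc \
      -sourcevocabularyclasses corpus.src.vcb.classes \
      -targetvocabularyclasses corpus.tgt.vcb.classes \
      -ncpus 4 -o src-tgt

With -ncpus 4, four alignment threads are used; the -o prefix determines the names of the dumped output files.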

general parameters:

-------------------
ml = 101 (maximum sentence length)


No. of iterations:

-------------------
hmmiterations = 5 (number of iterations for the HMM model)
model1iterations = 5 (number of iterations for Model 1)
model2iterations = 0 (number of iterations for Model 2)
model3iterations = 5 (number of iterations for Model 3)
model4iterations = 5 (number of iterations for Model 4)
model5iterations = 0 (number of iterations for Model 5)
model6iterations = 0 (number of iterations for Model 6)
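
These settings usually go into the config file that mgiza reads, one key/value pair per line. A minimal sketch, with the iteration counts as arbitrary example values:

# config sketch: 5 Model 1 and 5 HMM iterations, then 3 each of Model 3 and Model 4
model1iterations 5
hmmiterations 5
model2iterations 0
model3iterations 3
model4iterations 3
model5iterations 0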


parameter for various heuristics in GIZA++ for efficient training:

------------------------------------------------------------------
countincreasecutoff = 1e-06 (Counts increment cutoff threshold)
countincreasecutoffal = 1e-05 (Counts increment cutoff threshold for alignments in training of fertility models)
mincountincrease = 1e-07 (minimal count increase)
peggedcutoff = 0.03 (relative cutoff probability for alignment-centers in pegging)
probcutoff = 1e-07 (Probability cutoff threshold for lexicon probabilities)
probsmooth = 1e-07 (probability smoothing (floor) value )


parameters for describing the type and amount of output:

-----------------------------------------------------------
compactalignmentformat = 0 (0: detailed alignment format, 1: compact alignment format)
countoutputprefix = (The prefix for output counts)
dumpcount = 0 (whether to dump counts in addition to the final output)
dumpcountusingwordstring = 0 (in the count table, whether the actual word appears or just its id; default: id)
hmmdumpfrequency = 0 (dump frequency of HMM)
l = (log file name)
log = 0 (0: no logfile; 1: logfile)
model1dumpfrequency = 0 (dump frequency of Model 1)
model2dumpfrequency = 0 (dump frequency of Model 2)
model345dumpfrequency = 0 (dump frequency of Model 3/4/5)
nbestalignments = 0 (for printing the n best alignments)
nodumps = 0 (1: do not write any files)
o = (output file prefix)
onlyaldumps = 0 (1: write only the alignment dump files)
outputpath = (output path)
transferdumpfrequency = 0 (output: dump of transfer from Model 2 to 3)
verbose = 0 (0: not verbose; 1: verbose)
verbosesentence = -10 (number of the sentence for which detailed information should be printed (negative: no output))


parameters describing input files:

----------------------------------
c = (training corpus file name)
d = (dictionary file name)
previousa = (The a-table of previous step)
previousd = (The d-table of previous step)
previousd4 = (The d4-table of previous step)
previousd42 = (The d4-table (2) of previous step)
previoushmm = (The hmm-table of previous step)
previousn = (The n-table of previous step)
previousp0 = (The p0 value of the previous step)
previoust = (The t-table of previous step)
restart = 0 (restart training from a given level: 0: normal start, from Model 1; 1: Model 1; 2: Model 2 init (use Model 1 output and train Model 2); 3: Model 2 (use Model 2 output and train Model 2); 4: HMM init (use Model 1 output and train HMM); 5: HMM (use Model 2 output and train HMM); 6: HMM (use HMM output and train HMM); 7: Model 3 init (use HMM output and train Model 3); 8: Model 3 init (use Model 2 output and train Model 3); 9: Model 3; 10: Model 4 init (use Model 3 output and train Model 4); 11: Model 4 and on)
s = (source vocabulary file name)
sourcevocabularyclasses = (source vocabulary classes file name)
t = (target vocabulary file name)
targetvocabularyclasses = (target vocabulary classes file name)
tc = (test corpus file name)
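
The restart and previous* parameters implement the resume training mentioned in the introduction. A minimal sketch, resuming at Model 4 (restart level 11); the dumped-table file names here are assumptions based on common GIZA++ output suffixes:

mgiza -s corpus.src.vcb -t corpus.tgt.vcb -c corpus.src_corpus.tgt.snt \
      -restart 11 \
      -previoust prev.t3.final -previousa prev.a3.final \
      -previousn prev.n3.final -previousd prev.d3.final \
      -previousd4 prev.d4.final -previousd42 prev.D4.final \
      -previousp0 prev.p0_3.final \
      -o resumed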


smoothing parameters:

---------------------
emalsmooth = 0.2 (f-b-trn: smoothing factor for HMM alignment model (can be ignored by -emSmoothHMM))
model23smoothfactor = 0 (smoothing parameter for IBM-2/3 (interpolation with constant))
model4smoothfactor = 0.2 (smoothing parameter for alignment probabilities in Model 4)
model5smoothfactor = 0.1 (smoothing parameter for distortion probabilities in Model 5 (linear interpolation with constant))
nsmooth = 64 (smoothing for fertility parameters (good value: 64): weight for wordlength-dependent fertility parameters)
nsmoothgeneral = 0 (smoothing for fertility parameters (default: 0): weight for word-independent fertility parameters)


parameters modifying the models:

--------------------------------
compactadtable = 1 (1: only 3-dimensional alignment table for IBM-2 and IBM-3)
deficientdistortionforemptyword = 0 (0: IBM-3/IBM-4 as described in (Brown et al. 1993); 1: distortion model of the empty word is deficient; 2: distortion model of the empty word is deficient (differently); setting this parameter also helps to avoid aligning too many words with the empty word during IBM-3 and IBM-4 training)
depm4 = 76 (d_{=1}: &1:l, &2:m, &4:F, &8:E, d_{>1}&16:l, &32:m, &64:F, &128:E)
depm5 = 68 (d_{=1}: &1:l, &2:m, &4:F, &8:E, d_{>1}&16:l, &32:m, &64:F, &128:E)
emalignmentdependencies = 2 (lextrain: dependencies in the HMM alignment model. &1: sentence length; &2: previous class; &4: previous position; &8: French position; &16: French class)
emprobforempty = 0.4 (f-b-trn: probability for empty word)


parameters modifying the EM-algorithm:

--------------------------------------
m5p0 = -1 (fixed value for parameter p_0 in IBM-5 (if negative then it is determined in training))
manlexfactor1 = 0 ()
manlexfactor2 = 0 ()
manlexmaxmultiplicity = 20 ()
maxfertility = 10 (maximal fertility for fertility models)
ncpus = 0 (Number of threads to be executed, use 0 if you just want all CPUs to be used)
p0 = -1 (fixed value for parameter p_0 in IBM-3/4 (if negative then it is determined in training))
pegging = 0 (0: no pegging; 1: do pegging)
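
For example, to fix the empty-word probability p0 instead of estimating it during training, and to let mgiza use all available CPUs (a sketch reusing the file names assumed earlier):

mgiza -s corpus.src.vcb -t corpus.tgt.vcb -c corpus.src_corpus.tgt.snt \
      -p0 0.98 -ncpus 0 -o src-tgt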

