# Handwritten Formula

This example shows a simple implementation of the [Handwritten Formula (HWF)](https://arxiv.org/abs/2006.06649) task, where handwritten images of decimal formulas and their computed results are given, along with a domain knowledge base describing how to compute a decimal formula. The task is to recognize the symbols of the handwritten images (which can be digits or the operators '+', '-', '×', '÷') and to accurately determine their results; a minimal sketch of such a knowledge base is given after the usage listing below.

## Run

```bash
pip install -r requirements.txt
python main.py
```

## Usage

```bash
usage: main.py [-h] [--no-cuda] [--epochs EPOCHS]
               [--label_smoothing LABEL_SMOOTHING] [--lr LR]
               [--batch-size BATCH_SIZE] [--loops LOOPS]
               [--segment_size SEGMENT_SIZE] [--save_interval SAVE_INTERVAL]
               [--max-revision MAX_REVISION]
               [--require-more-revision REQUIRE_MORE_REVISION] [--ground]
               [--max-err MAX_ERR]

Handwritten Formula example

optional arguments:
  -h, --help            show this help message and exit
  --no-cuda             disables CUDA training
  --epochs EPOCHS       number of epochs in each learning loop iteration
                        (default : 1)
  --label_smoothing LABEL_SMOOTHING
                        label smoothing in cross entropy loss (default : 0.2)
  --lr LR               base model learning rate (default : 0.001)
  --batch-size BATCH_SIZE
                        base model batch size (default : 32)
  --loops LOOPS         number of loop iterations (default : 5)
  --segment_size SEGMENT_SIZE
                        segment size (default : 1/3)
  --save_interval SAVE_INTERVAL
                        save interval (default : 1)
  --max-revision MAX_REVISION
                        maximum revision in reasoner (default : -1)
  --require-more-revision REQUIRE_MORE_REVISION
                        require more revision in reasoner (default : 0)
  --ground              use GroundKB (default: False)
  --max-err MAX_ERR     max tolerance during abductive reasoning (default :
                        1e-10)
```
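To make the task concrete, below is a minimal, illustrative sketch of the kind of rule such a domain knowledge base encodes: evaluating a sequence of recognized symbols to a numeric result, and checking a candidate recognition against the labeled result within a tolerance (mirroring the role of `--max-err`). The names `OPS`, `eval_formula`, and `is_consistent` are hypothetical and are not the actual API of this example's knowledge base.

```python
# Illustrative sketch only -- these names (OPS, eval_formula, is_consistent)
# are hypothetical and do not appear in this example's code base.

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "×": lambda a, b: a * b,
    "÷": lambda a, b: a / b,
}

def eval_formula(symbols):
    """Evaluate a list of recognized symbols, e.g. ['3', '+', '4', '×', '2'],
    giving '×' and '÷' precedence over '+' and '-'."""
    # First pass: fold multiplication and division into their left operand.
    stack = [float(symbols[0])]
    for i in range(1, len(symbols), 2):
        op, rhs = symbols[i], float(symbols[i + 1])
        if op in ("×", "÷"):
            stack[-1] = OPS[op](stack[-1], rhs)
        else:
            stack += [op, rhs]
    # Second pass: resolve the remaining '+' and '-' from left to right.
    result = stack[0]
    for op, rhs in zip(stack[1::2], stack[2::2]):
        result = OPS[op](result, rhs)
    return result

def is_consistent(symbols, labeled_result, max_err=1e-10):
    """Check a candidate recognition against the labeled result, mirroring
    the role of the --max-err tolerance during abductive reasoning."""
    return abs(eval_formula(symbols) - labeled_result) <= max_err

print(eval_formula(["3", "+", "4", "×", "2"]))          # 11.0
print(is_consistent(["3", "+", "4", "×", "2"], 11.0))   # True
```

When a candidate recognition is inconsistent with the labeled result, the abductive reasoner revises some of the predicted symbols to restore consistency, with the amount of revision bounded by `--max-revision`.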
## Environment

For all experiments, we used a single Linux server. Details on the specifications are listed in the table below.

| CPU | GPU | Memory | OS |
| :---: | :---: | :---: | :---: |
| 2 * Xeon Platinum 8358 (32 cores, 2.6 GHz base frequency) | A100 80GB | 512GB | Ubuntu 20.04 |
## Performance

We present the results of ABL below, including reasoning accuracy (for different equation lengths in the HWF dataset), training time (to achieve that accuracy using all equation lengths), and average memory usage (when training with all equation lengths). These results are compared with the following methods:

- [**NGS**](https://github.com/liqing-ustc/NGS): a neural-symbolic framework that uses a grammar model and a back-search algorithm to improve its computing process;
- [**DeepProbLog**](https://github.com/ML-KULeuven/deepproblog/tree/master): an extension of ProbLog that introduces neural predicates into probabilistic logic programming;
- [**DeepStochLog**](https://github.com/ML-KULeuven/deepstochlog/tree/main): a neural-symbolic framework based on stochastic logic programs.
<table>
  <thead>
    <tr>
      <th rowspan="2">Method</th>
      <th colspan="5">Reasoning Accuracy<br>(for different equation lengths)</th>
      <th rowspan="2">Training Time (s)<br>(to achieve the Acc. using all lengths)</th>
      <th rowspan="2">Average Memory Usage (MB)<br>(using all lengths)</th>
    </tr>
    <tr>
      <th>1</th>
      <th>3</th>
      <th>5</th>
      <th>7</th>
      <th>All</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>NGS</td>
      <td>91.2</td>
      <td>89.1</td>
      <td>92.7</td>
      <td>5.2</td>
      <td>98.4</td>
      <td>426.2</td>
      <td>3705</td>
    </tr>
    <tr>
      <td>DeepProbLog</td>
      <td>90.8</td>
      <td>85.6</td>
      <td>timeout*</td>
      <td>timeout</td>
      <td>timeout</td>
      <td>timeout</td>
      <td>4315</td>
    </tr>
    <tr>
      <td>DeepStochLog</td>
      <td>92.8</td>
      <td>87.5</td>
      <td>92.1</td>
      <td>timeout</td>
      <td>timeout</td>
      <td>timeout</td>
      <td>4355</td>
    </tr>
    <tr>
      <td>ABL</td>
      <td>94.0</td>
      <td>89.7</td>
      <td>96.5</td>
      <td>97.2</td>
      <td>99.2</td>
      <td>77.3</td>
      <td>3074</td>
    </tr>
  </tbody>
</table>

\* timeout: requires more than 1 hour to execute