diff --git a/abl/reasoning/kb.py b/abl/reasoning/kb.py
index 0920af2..b1ce152 100644
--- a/abl/reasoning/kb.py
+++ b/abl/reasoning/kb.py
@@ -429,7 +429,7 @@ class PrologKB(KBBase):
     def get_query_string(self, pseudo_label, y, revision_idx):
         """
         Get the query to be used for consulting Prolog.
-        This is a default fuction for demo, users would override this function to adapt to their own
+        This is a default function for demo; users can override this function to adapt to their own
         Prolog file.
         In this demo function, return query `logic_forward([kept_labels, Revise_labels], Res).`.
 
         Parameters
diff --git a/docs/Intro/Evaluation.rst b/docs/Intro/Evaluation.rst
index bf06929..42f21e5 100644
--- a/docs/Intro/Evaluation.rst
+++ b/docs/Intro/Evaluation.rst
@@ -10,7 +10,7 @@ Evaluation
 Metrics
 ==================
 
-ABL-Package seperates the evaluation process as na independent class from the ``BaseBridge`` which accounts for training and testing. To customize our own metrics, we need to inherit from ``BaseMetric`` and implement the ``process`` and ``compute_metrics`` methods. The ``process`` method accepts a batch of model prediction. After processing this batch, we save the information to ``self.results`` property. The input results of ``compute_metrics`` is all the information saved in ``process`` and it uses these information to calculate and return a dict that holds the evaluation results.
+ABL-Package separates the evaluation process into an independent class, distinct from the ``BaseBridge`` which accounts for training and testing. To customize our own metrics, we need to inherit from ``BaseMetric`` and implement the ``process`` and ``compute_metrics`` methods. The ``process`` method accepts a batch of model predictions. After processing this batch, we save the information to the ``self.results`` property. The input to ``compute_metrics`` is all the information saved by ``process``, which it uses to calculate and return a dict that holds the evaluation results.
-We provide two basic metrics, namely ``SymbolMetric`` and ``SemanticsMetric``, which are used to evaluate the accuracy of the machine learning model's predictions and the accuracy of the ``logic_forward`` results, respectively. Using ``SymbolMetric`` as an example, the following code shows how to implement a custom metrics.
+We provide two basic metrics, namely ``SymbolMetric`` and ``SemanticsMetric``, which are used to evaluate the accuracy of the machine learning model's predictions and the accuracy of the ``logic_forward`` results, respectively. Using ``SymbolMetric`` as an example, the following code shows how to implement a custom metric.
diff --git a/docs/Intro/Quick-Start.rst b/docs/Intro/Quick-Start.rst
index 5ecf40f..b7100ba 100644
--- a/docs/Intro/Quick-Start.rst
+++ b/docs/Intro/Quick-Start.rst
@@ -118,7 +118,7 @@ Afterward, we wrap the ``base_model`` into ``ABLModel``.
 
    model = ABLModel(base_model)
 
-Read more about `building the machine learning part `_.
+Read more about `building the learning part `_.
 
 Building the Reasoning Part
 ---------------------------
diff --git a/docs/Overview/Abductive-Learning.rst b/docs/Overview/Abductive-Learning.rst
index ad10455..1ebdf8d 100644
--- a/docs/Overview/Abductive-Learning.rst
+++ b/docs/Overview/Abductive-Learning.rst
@@ -3,27 +3,26 @@ Abductive Learning
 Traditional supervised machine learning, e.g. classification, is
 predominantly data-driven, as shown in the below figure.
-Here, a set of data examples is given,
-where the input serving as training
-instance, and the ouput serving as the corresponding ground-truth
-label. These data are then used to train a classifier model :math:`f`
-to accurately predict the unseen data input.
+Here, a set of data examples is given, including training instances
+:math:`\{x_1,\dots,x_m\}` and corresponding ground-truth labels :math:`\{\text{label}(x_1),\dots,\text{label}(x_m)\}`.
+These data are then used to train a classifier model :math:`f`,
+aiming to accurately predict the unseen data instances.
 
 .. image:: ../img/ML.png
    :align: center
    :width: 300px
 
-In **Abductive Learning (ABL)**, we assume that, in addition to data as
-examples, there is also a knowledge base :math:`\mathcal{KB}` containing
+In **Abductive Learning (ABL)**, we assume that, in addition to data,
+there is also a knowledge base :math:`\mathcal{KB}` containing
 domain knowledge at our disposal. We aim for the classifier :math:`f`
-to make correct predictions on data input :math:`\{x_1,\dots,x_m\}`,
-and meanwhile, the logical facts grounded by
+to make correct predictions on data instances :math:`\{x_1,\dots,x_m\}`,
+and meanwhile, the logical facts grounded by the prediction
 :math:`\left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\}`
 should be compatible with :math:`\mathcal{KB}`.
 
 The process of ABL is as follows:
 
-1. Upon receiving data inputs :math:`\left\{x_1,\dots,x_m\right\}`,
+1. Upon receiving data instances :math:`\left\{x_1,\dots,x_m\right\}` as input,
    pseudo-labels :math:`\left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\}`
    are predicted by a data-driven classifier model.