
[DOC] minor fix

troyyyyy · 2 years ago · commit 0bb28a4103 · pull/1/head
4 changed files with 12 additions and 13 deletions
  1. abl/reasoning/kb.py (+1 / -1)
  2. docs/Intro/Evaluation.rst (+1 / -1)
  3. docs/Intro/Quick-Start.rst (+1 / -1)
  4. docs/Overview/Abductive-Learning.rst (+9 / -10)

abl/reasoning/kb.py (+1 / -1)

@@ -429,7 +429,7 @@ class PrologKB(KBBase):
     def get_query_string(self, pseudo_label, y, revision_idx):
         """
         Get the query to be used for consulting Prolog.
-        This is a default fuction for demo, users would override this function to adapt to their own
+        This is a default function for demo, users would override this function to adapt to their own
         Prolog file. In this demo function, return query `logic_forward([kept_labels, Revise_labels], Res).`.
         Parameters
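
As the docstring above suggests, users adapt ``get_query_string`` to their own Prolog file by overriding it. A minimal sketch of such an override is given below; the subclass name, the ``my_logic_forward`` predicate, and the way revision positions become Prolog variables are illustrative assumptions, not part of this commit.

from abl.reasoning.kb import PrologKB  # import path follows the file path shown above

class MyPrologKB(PrologKB):
    def get_query_string(self, pseudo_label, y, revision_idx):
        # Positions listed in revision_idx become Prolog variables to be revised;
        # the remaining pseudo-labels are kept fixed (assumed convention).
        terms = ["X{}".format(i) if i in revision_idx else str(label)
                 for i, label in enumerate(pseudo_label)]
        # Query a user-defined predicate in the consulted Prolog file.
        return "my_logic_forward([{}], {}).".format(", ".join(terms), y)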


docs/Intro/Evaluation.rst (+1 / -1)

@@ -10,7 +10,7 @@
 Evaluation Metrics
 ==================
 
-ABL-Package seperates the evaluation process as na independent class from the ``BaseBridge`` which accounts for training and testing. To customize our own metrics, we need to inherit from ``BaseMetric`` and implement the ``process`` and ``compute_metrics`` methods. The ``process`` method accepts a batch of model prediction. After processing this batch, we save the information to ``self.results`` property. The input results of ``compute_metrics`` is all the information saved in ``process`` and it uses these information to calculate and return a dict that holds the evaluation results.
+ABL-Package seperates the evaluation process as an independent class from the ``BaseBridge`` which accounts for training and testing. To customize our own metrics, we need to inherit from ``BaseMetric`` and implement the ``process`` and ``compute_metrics`` methods. The ``process`` method accepts a batch of model prediction. After processing this batch, we save the information to ``self.results`` property. The input results of ``compute_metrics`` is all the information saved in ``process`` and it uses these information to calculate and return a dict that holds the evaluation results.
 
 We provide two basic metrics, namely ``SymbolMetric`` and ``SemanticsMetric``, which are used to evaluate the accuracy of the machine learning model's predictions and the accuracy of the ``logic_forward`` results, respectively. Using ``SymbolMetric`` as an example, the following code shows how to implement a custom metrics.
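
Following the interface described in the changed paragraph (inherit from ``BaseMetric``, accumulate per-batch information in ``process`` via ``self.results``, aggregate everything in ``compute_metrics`` and return a dict), a minimal sketch of a custom metric might look as follows; the import path and the structure of the batch argument are assumptions for illustration.

from abl.evaluation import BaseMetric  # import path assumed for illustration

class SimpleAccuracyMetric(BaseMetric):
    def process(self, data_examples):
        # `data_examples` is assumed to expose predicted and ground-truth labels
        # for one batch; store per-example correctness in self.results.
        for pred, gt in zip(data_examples["pred"], data_examples["gt"]):
            self.results.append(int(pred == gt))

    def compute_metrics(self, results):
        # `results` holds everything saved by process(); return a dict of scores.
        return {"accuracy": sum(results) / max(len(results), 1)}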



docs/Intro/Quick-Start.rst (+1 / -1)

@@ -118,7 +118,7 @@ Afterward, we wrap the ``base_model`` into ``ABLModel``.
 
    model = ABLModel(base_model)
 
-Read more about `building the machine learning part <Learning.html>`_.
+Read more about `building the learning part <Learning.html>`_.
 
 Building the Reasoning Part
 ---------------------------


docs/Overview/Abductive-Learning.rst (+9 / -10)

@@ -3,27 +3,26 @@ Abductive Learning
 
 Traditional supervised machine learning, e.g. classification, is
 predominantly data-driven, as shown in the below figure.
-Here, a set of data examples is given,
-where the input serving as training
-instance, and the ouput serving as the corresponding ground-truth
-label. These data are then used to train a classifier model :math:`f`
-to accurately predict the unseen data input.
+Here, a set of data examples is given, including training instances
+:math:`\{x_1,\dots,x_m\}` and corresponding ground-truth labels :math:`\{\text{label}(x_1),\dots,\text{label}(x_m)\}`.
+These data are then used to train a classifier model :math:`f`,
+aiming to accurately predict the unseen data instances.
 
 .. image:: ../img/ML.png
    :align: center
    :width: 300px
 
-In **Abductive Learning (ABL)**, we assume that, in addition to data as
-examples, there is also a knowledge base :math:`\mathcal{KB}` containing
+In **Abductive Learning (ABL)**, we assume that, in addition to data,
+there is also a knowledge base :math:`\mathcal{KB}` containing
 domain knowledge at our disposal. We aim for the classifier :math:`f`
-to make correct predictions on data input :math:`\{x_1,\dots,x_m\}`,
-and meanwhile, the logical facts grounded by
+to make correct predictions on data instances :math:`\{x_1,\dots,x_m\}`,
+and meanwhile, the logical facts grounded by the prediction
 :math:`\left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\}`
 should be compatible with :math:`\mathcal{KB}`.
 
 The process of ABL is as follows:
 
-1. Upon receiving data inputs :math:`\left\{x_1,\dots,x_m\right\}`,
+1. Upon receiving data instances :math:`\left\{x_1,\dots,x_m\right\}` as input,
    pseudo-labels
    :math:`\left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\}`
    are predicted by a data-driven classifier model.
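
One way to read the compatibility requirement in the revised text above (an editorial gloss, not wording from the commit) is as a consistency condition between the knowledge base and the facts grounded by the predictions:

.. math::

   \mathcal{KB} \cup \left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\} \not\models \bot

When this condition is violated, ABL revises the pseudo-labels through abductive reasoning with :math:`\mathcal{KB}` before they are used to further train :math:`f`.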

