diff --git a/docs/Intro/Basics.rst b/docs/Intro/Basics.rst
index 6017d69..051f078 100644
--- a/docs/Intro/Basics.rst
+++ b/docs/Intro/Basics.rst
@@ -1,6 +1,51 @@
 Learn the Basics
 ================
 
+Modules in ABL-Package
+----------------------
+
+The ABL-Package is an implementation of Abductive Learning, designed to
+harmoniously integrate and balance the use of machine learning and
+logical reasoning within a unified model. As depicted below, the
+ABL-Package comprises three primary modules: **Data**, **Learning**, and
+**Reasoning**, corresponding to the three pivotal components in current
+AI: data, models, and knowledge.
+
+.. image:: ../img/ABL-Package.png
+
+The **Data** module manages the storage, operation, and evaluation of
+data. It features the class ``ListData`` (inherited from the base class
+``BaseDataElement``), which defines the data structures used in
+Abductive Learning and provides common data operations like addition,
+deletion, retrieval, and slicing. Additionally, a series of evaluation
+metrics, including the classes ``SymbolMetric`` and ``SemanticsMetric``
+(both derived from the base class ``BaseMetric``), provide methods for
+evaluating model quality from a data perspective.
+
+The **Learning** module is responsible for the construction,
+deployment, and training of machine learning models. Its central
+class, ``ABLModel``, encapsulates the machine learning model, which
+may be a model based on Scikit-learn or a neural network constructed
+with the class ``BasicNN``.
+
+The **Reasoning** module covers the reasoning part of Abductive
+Learning. The class ``KBBase`` allows users to instantiate a domain
+knowledge base. For diverse types of knowledge, we also offer
+implementations like ``GroundKB`` and ``PrologKB``; the latter, for
+example, enables a knowledge base to be imported in the form of Prolog
+files. Once the knowledge base is built, the class ``ReasonerBase`` is
+responsible for minimizing the inconsistency between the knowledge base
+and the learning model.
+
+Finally, the integration of these three modules occurs through the
+**Bridge** module, which features the class ``SimpleBridge`` (inherited
+from the base class ``BaseBridge``). The Bridge module synthesizes data,
+learning, and reasoning, and facilitates the training and testing of the
+entire Abductive Learning framework.
+
+Use ABL-Package Step by Step
+----------------------------
+
 In a typical Abductive Learning process, as illustrated below, data
 inputs are first mapped to pseudo labels through a machine learning model.
 These pseudo labels then pass through a knowledge base :math:`\mathcal{KB}`
@@ -13,7 +58,7 @@
 which in turn revise the outcomes of the machine learning model, and are
 then fed back into the machine learning model for further training. To
 implement this process, the following four steps are necessary:
 
-.. image:: ../img/ABL-Package.png
+.. image:: ../img/usage.png
 
 1. Prepare datasets
diff --git a/docs/Overview/Abductive Learning.rst b/docs/Overview/Abductive Learning.rst
index b336dd0..94af509 100644
--- a/docs/Overview/Abductive Learning.rst
+++ b/docs/Overview/Abductive Learning.rst
@@ -3,21 +3,20 @@ Abductive Learning
 Traditional supervised machine learning, e.g. classification, is
 predominantly data-driven, as shown in the figure below.
 
-Here, a set of training examples :math:`\left\{\left(x_1, y_1\right),
-\ldots,\left(x_m, y_m\right)\right\}` is given,
-where :math:`x_i \in \mathcal{X}` is the :math:`i`-th training
-instance, :math:`y_i \in \mathcal{Y}` is the corresponding ground-truth
-label. These data are then used to train a classifier model :math:`f:
-\mathcal{X} \mapsto \mathcal{Y}` to accurately predict the unseen data.
+Here, a set of data examples is given,
+where the input serves as the training
+instance and the output serves as the corresponding ground-truth
+label. These data are then used to train a classifier model :math:`f`
+to accurately predict unseen data inputs.
 
 .. image:: ../img/ML.png
    :width: 600px
 
 In **Abductive Learning (ABL)**, we assume that, in addition to data as
 examples, there is also a knowledge base :math:`\mathcal{KB}` containing
-domain knowledge at our disposal. We aim for the classifier :math:`f:
-\mathcal{X} \mapsto \mathcal{Y}` to make correct predictions on unseen
-data, and meanwhile, the logical facts grounded by
+domain knowledge at our disposal. We aim for the classifier :math:`f`
+to make correct predictions on the data inputs :math:`\{x_1,\dots,x_m\}`,
+and meanwhile, the logical facts grounded by
 :math:`\left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\}`
 should be compatible with :math:`\mathcal{KB}`.
 
diff --git a/docs/img/ABL-Package.png b/docs/img/ABL-Package.png
index bce7659..c20a5ec 100644
Binary files a/docs/img/ABL-Package.png and b/docs/img/ABL-Package.png differ
diff --git a/docs/img/usage.png b/docs/img/usage.png
new file mode 100644
index 0000000..bce7659
Binary files /dev/null and b/docs/img/usage.png differ
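
To make the step-by-step usage added in ``docs/Intro/Basics.rst`` concrete, the following is a minimal sketch of how the four steps could be wired together. The import paths (``abl.learning``, ``abl.reasoning``, ``abl.evaluation``, ``abl.bridge``) and the constructor and method signatures of ``BasicNN``, ``ABLModel``, ``KBBase``, ``ReasonerBase``, ``SymbolMetric``, and ``SimpleBridge`` are assumptions inferred only from the class names documented above; the actual ABL-Package API may differ.

.. code-block:: python

   # Illustrative sketch only: import paths, constructor arguments, and method
   # names are assumptions based on the class names in the docs, not the
   # verified ABL-Package API.
   import torch
   from torch import nn

   from abl.learning import ABLModel, BasicNN      # assumed module paths
   from abl.reasoning import KBBase, ReasonerBase
   from abl.evaluation import SymbolMetric
   from abl.bridge import SimpleBridge

   # Step 1: prepare datasets (toy placeholders; real data should follow the
   # Data module's structures, e.g. ListData).
   X = [[torch.randn(1, 28, 28), torch.randn(1, 28, 28)]]  # one example, two inputs
   Y = [3]                                                  # its reasoning target
   train_data = (X, None, Y)

   # Step 2: build the learning part by wrapping a network with BasicNN,
   # then with ABLModel.
   net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
   base_model = BasicNN(net, nn.CrossEntropyLoss(),
                        torch.optim.Adam(net.parameters()))
   model = ABLModel(base_model)

   # Step 3: build the reasoning part by subclassing KBBase to encode domain
   # knowledge, then let a reasoner minimize the inconsistency between the
   # knowledge base and the model's predictions.
   class ToyKB(KBBase):
       def __init__(self):
           super().__init__(pseudo_label_list=list(range(10)))

       def logic_forward(self, pseudo_labels):
           # Toy rule: the reasoning result is the sum of the pseudo labels.
           return sum(pseudo_labels)

   reasoner = ReasonerBase(ToyKB())

   # Step 4: bridge learning and reasoning, then train and test the pipeline.
   bridge = SimpleBridge(model, reasoner, metric_list=[SymbolMetric()])
   bridge.train(train_data, loops=5)
   bridge.test(train_data)

The data format and training options shown here are placeholders; the package's API reference remains the authoritative source for the exact signatures.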
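The requirement added in ``docs/Overview/Abductive Learning.rst``, namely that the logical facts grounded by :math:`\left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\}` be compatible with :math:`\mathcal{KB}`, can be illustrated with a self-contained toy example that does not use the ABL-Package API. Here the knowledge base is a single hypothetical rule (the predicted digits must sum to a known target), and abduction revises as few predictions as possible so that the rule holds.

.. code-block:: python

   # Self-contained toy illustration of KB compatibility and abduction;
   # it does not use the ABL-Package API.
   from itertools import product

   def kb_consistent(pseudo_labels, target):
       """Toy knowledge base: the grounded fact sum(labels) == target must hold."""
       return sum(pseudo_labels) == target

   def abduce(pseudo_labels, target, label_space=range(10)):
       """Return a KB-consistent labeling that changes as few positions as possible."""
       best, best_changes = None, len(pseudo_labels) + 1
       for candidate in product(label_space, repeat=len(pseudo_labels)):
           if kb_consistent(candidate, target):
               changes = sum(a != b for a, b in zip(candidate, pseudo_labels))
               if changes < best_changes:
                   best, best_changes = list(candidate), changes
       return best

   # The classifier f predicts [7, 2], but the knowledge base says the sum is 10,
   # so the predictions are inconsistent with KB; abduction revises one label.
   print(abduce([7, 2], target=10))   # [7, 3]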