
[DOC] change code block

pull/6/head
troyyyyy · 2 years ago · commit 7894167737
4 changed files with 51 additions and 51 deletions:

1. docs/Examples/HED.rst (+11, -11)
2. docs/Examples/HWF.rst (+14, -14)
3. docs/Examples/MNISTAdd.rst (+14, -14)
4. docs/Examples/Zoo.rst (+12, -12)

docs/Examples/HED.rst (+11, -11)

@@ -25,7 +25,7 @@ explanations to the observed facts, suggesting some pseudo-labels to be
revised. This process enables us to further update the machine learning
model.

-.. code:: ipython3
+.. code:: python

# Import necessary libraries and modules
import os.path as osp
@@ -48,7 +48,7 @@ Working with Data

First, we get the datasets of handwritten equations:

-.. code:: ipython3
+.. code:: python

total_train_data = get_dataset(train=True)
train_data, val_data = split_equation(total_train_data, 3, 1)
@@ -56,7 +56,7 @@ First, we get the datasets of handwritten equations:

The datasets are shown below:

-.. code:: ipython3
+.. code:: python

true_train_equation = train_data[1]
false_train_equation = train_data[0]
@@ -100,7 +100,7 @@ Out:

As illustrations, we show four equations in the training dataset:

-.. code:: ipython3
+.. code:: python

true_train_equation_with_length_5 = true_train_equation[5]
true_train_equation_with_length_8 = true_train_equation[8]
@@ -176,7 +176,7 @@ object to create the base model. ``BasicNN`` is a class that
encapsulates a PyTorch model, transforming it into a base model with an
sklearn-style interface.

-.. code:: ipython3
+.. code:: python

# class of symbol may be one of ['0', '1', '+', '='], total of 4 classes
cls = SymbolNet(num_classes=4)
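
The diff truncates this block. For orientation, a minimal sketch of the wrapping step follows; the loss, optimizer, and device here are illustrative assumptions, not taken from this commit:

.. code:: python

    # Sketch only: hyperparameters are assumed for illustration; torch and
    # torch.nn (as ``nn``) are assumed to be imported in the import cell above
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.RMSprop(cls.parameters(), lr=0.001)
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    base_model = BasicNN(cls, loss_fn, optimizer, device=device)
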
@@ -200,7 +200,7 @@ data (i.e., a list of images comprising the equation). Therefore, we
wrap the base model into ``ABLModel``, which enables the learning part
to train, test, and predict on example-level data.

-.. code:: ipython3
+.. code:: python

model = ABLModel(base_model)

@@ -223,7 +223,7 @@ The knowledge base is already built in ``HedKB``.
``HedKB`` is derived from class ``PrologKB``, and is built upon the aforementioned Prolog
files.

-.. code:: ipython3
+.. code:: python

kb = HedKB()
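
``HedKB`` itself is instantiated without arguments above; for orientation, a ``PrologKB``-derived base is typically constructed from a pseudo-label list and a Prolog file. A hypothetical sketch, with placeholder values not taken from this commit:

.. code:: python

    # Hypothetical sketch: the pseudo-label list and Prolog file name are
    # placeholders for illustration only
    kb = PrologKB(pseudo_label_list=["0", "1", "+", "="], pl_file="learn_add.pl")
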

@@ -253,7 +253,7 @@ Additionally, methods for abducing rules from data have been
incorporated. Users interested can refer to the specific implementation
of ``HedReasoner`` in ``reasoning/reasoning.py``.

-.. code:: ipython3
+.. code:: python

reasoner = HedReasoner(kb, dist_func="hamming", use_zoopt=True, max_revision=10)

@@ -267,7 +267,7 @@ used to evaluate the accuracy of the machine learning model’s
predictions and the accuracy of the final reasoning results,
respectively.

-.. code:: ipython3
+.. code:: python

# Set up metrics
metric_list = [SymbolAccuracy(prefix="hed"), ReasoningMetric(kb=kb, prefix="hed")]
@@ -279,13 +279,13 @@ Now, the last step is to bridge the learning and reasoning part. We
proceed with this step by creating an instance of ``HedBridge``, which is
derived from ``SimpleBridge`` and tailored specifically for this task.

-.. code:: ipython3
+.. code:: python

bridge = HedBridge(model, reasoner, metric_list)

Perform pretraining, training and testing by invoking the ``pretrain``, ``train`` and ``test`` methods of ``HedBridge``.

-.. code:: ipython3
+.. code:: python

# Build logger
print_log("Abductive Learning on the HED example.", logger="current")


docs/Examples/HWF.rst (+14, -14)

@@ -22,7 +22,7 @@ and revise the initial symbols yielded by the learning part through
abductive reasoning. This process enables us to further update the
machine learning model.

-.. code:: ipython3
+.. code:: python

# Import necessary libraries and modules
import os.path as osp
@@ -46,7 +46,7 @@ Working with Data

First, we get the training and testing datasets:

-.. code:: ipython3
+.. code:: python

train_data = get_dataset(train=True, get_pseudo_label=True)
test_data = get_dataset(train=False, get_pseudo_label=True)
@@ -62,7 +62,7 @@ The length and structures of datasets are illustrated as follows.
``gt_pseudo_label`` is only used to evaluate the performance of
the learning part but not to train the model.

-.. code:: ipython3
+.. code:: python

print(f"Both train_data and test_data consist of 3 components: X, gt_pseudo_label, Y")
print()
@@ -102,7 +102,7 @@ The ith element of X, gt_pseudo_label, and Y together constitute the ith
data example. Here we use two of them (the 1001st and the 3001st) as
illustrations:

-.. code:: ipython3
+.. code:: python

X_1000, gt_pseudo_label_1000, Y_1000 = train_X[1000], train_gt_pseudo_label[1000], train_Y[1000]
print(f"X in the 1001st data example (a list of images):")
@@ -175,7 +175,7 @@ object to create the base model. ``BasicNN`` is a class that
encapsulates a PyTorch model, transforming it into a base model with an
sklearn-style interface.

-.. code:: ipython3
+.. code:: python

# class of symbol may be one of ['1', ..., '9', '+', '-', '*', '/'], total of 13 classes
cls = SymbolNet(num_classes=13, image_size=(45, 45, 1))
@@ -196,7 +196,7 @@ sklearn-style interface.
are used to predict the class index and the probabilities of each class
for images. As shown below:

-.. code:: ipython3
+.. code:: python

data_instances = [torch.randn(1, 45, 45) for _ in range(32)]
pred_idx = base_model.predict(X=data_instances)
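
The hunk ends before the probability call mentioned in the prose; a one-line continuation would plausibly be:

.. code:: python

    # per-class probabilities for each instance (counterpart of predict above)
    pred_prob = base_model.predict_proba(X=data_instances)
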
@@ -221,7 +221,7 @@ data (i.e., a list of images comprising the formula). Therefore, we wrap
the base model into ``ABLModel``, which enables the learning part to
train, test, and predict on example-level data.

-.. code:: ipython3
+.. code:: python

model = ABLModel(base_model)

@@ -231,7 +231,7 @@ method accepts data examples as input and outputs the class labels and
the probabilities of each class for all instances within these data
examples.

-.. code:: ipython3
+.. code:: python

from ablkit.data.structures import ListData
# ListData is a data structure provided by ABL Kit that can be used to organize data examples
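
The block is cut short by the diff. A hedged sketch of how ``ListData`` is typically filled and passed to the wrapped model (field names follow the prose; the exact slice is illustrative):

.. code:: python

    # Sketch: organize a few data examples and predict on them
    data_examples = ListData()
    data_examples.X = train_X[:3]                       # example-level inputs
    data_examples.gt_pseudo_label = train_gt_pseudo_label[:3]
    data_examples.Y = train_Y[:3]
    model.predict(data_examples)  # labels and probabilities per instance
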
@@ -274,7 +274,7 @@ initialize the ``pseudo_label_list`` parameter specifying the list of
possible pseudo-labels, and override the ``logic_forward`` function
defining how to perform (deductive) reasoning.

-.. code:: ipython3
+.. code:: python

class HwfKB(KBBase):
def __init__(self, pseudo_label_list=["1", "2", "3", "4", "5", "6", "7", "8", "9", "+", "-", "*", "/"]):
@@ -303,7 +303,7 @@ reasoning and abductive reasoning). Below is an example of performing
(deductive) reasoning, and users can refer to :ref:`Performing abductive
reasoning in the knowledge base <kb-abd>` for details of abductive reasoning.

-.. code:: ipython3
+.. code:: python

pseudo_labels = ["1", "-", "2", "*", "5"]
reasoning_result = kb.logic_forward(pseudo_labels)
@@ -338,7 +338,7 @@ can minimize inconsistencies between the knowledge base and
pseudo-labels predicted by the learning part, and then return only one
candidate that has the highest consistency.

-.. code:: ipython3
+.. code:: python

reasoner = Reasoner(kb)

@@ -366,7 +366,7 @@ used to evaluate the accuracy of the machine learning model’s
predictions and the accuracy of the final reasoning results,
respectively.

-.. code:: ipython3
+.. code:: python

metric_list = [SymbolAccuracy(prefix="hwf"), ReasoningMetric(kb=kb, prefix="hwf")]

@@ -376,14 +376,14 @@ Bridging Learning and Reasoning
Now, the last step is to bridge the learning and reasoning part. We
proceed with this step by creating an instance of ``SimpleBridge``.

-.. code:: ipython3
+.. code:: python

bridge = SimpleBridge(model, reasoner, metric_list)

Perform training and testing by invoking the ``train`` and ``test``
methods of ``SimpleBridge``.

-.. code:: ipython3
+.. code:: python

# Build logger
print_log("Abductive Learning on the HWF example.", logger="current")


docs/Examples/MNISTAdd.rst (+14, -14)

@@ -21,7 +21,7 @@ and revise the initial digits yielded by the learning part through
abductive reasoning. This process enables us to further update the
machine learning model.

-.. code:: ipython3
+.. code:: python

# Import necessary libraries and modules
import os.path as osp
@@ -45,7 +45,7 @@ Working with Data

First, we get the training and testing datasets:

-.. code:: ipython3
+.. code:: python

train_data = get_dataset(train=True, get_pseudo_label=True)
test_data = get_dataset(train=False, get_pseudo_label=True)
@@ -62,7 +62,7 @@ of datasets are illustrated as follows.
``gt_pseudo_label`` is only used to evaluate the performance of
the learning part but not to train the model.

-.. code:: ipython3
+.. code:: python

print(f"Both train_data and test_data consist of 3 components: X, gt_pseudo_label, Y")
print("\n")
@@ -103,7 +103,7 @@ The ith element of X, gt_pseudo_label, and Y together constitute the ith
data example. As an illustration, in the first data example of the
training set, we have:

-.. code:: ipython3
+.. code:: python

X_0, gt_pseudo_label_0, Y_0 = train_X[0], train_gt_pseudo_label[0], train_Y[0]
print(f"X in the first data example (a list of two images):")
@@ -145,7 +145,7 @@ within a ``BasicNN`` object to create the base model. ``BasicNN`` is a
class that encapsulates a PyTorch model, transforming it into a base
model with a sklearn-style interface.

-.. code:: ipython3
+.. code:: python

cls = LeNet5(num_classes=10)
loss_fn = nn.CrossEntropyLoss(label_smoothing=0.1)
@@ -167,7 +167,7 @@ model with a sklearn-style interface.
are used to predict the class index and the probabilities of each class
for images. As shown below:

-.. code:: ipython3
+.. code:: python

data_instances = [torch.randn(1, 28, 28) for _ in range(32)]
pred_idx = base_model.predict(X=data_instances)
@@ -190,7 +190,7 @@ data (i.e., a pair of images). Therefore, we wrap the base model into
``ABLModel``, which enables the learning part to train, test, and
predict on example-level data.

-.. code:: ipython3
+.. code:: python

model = ABLModel(base_model)

@@ -200,7 +200,7 @@ method accepts data examples as input and outputs the class labels and
the probabilities of each class for all instances within these data
examples.

-.. code:: ipython3
+.. code:: python

from ablkit.data.structures import ListData
# ListData is a data structure provided by ABL Kit that can be used to organize data examples
@@ -241,7 +241,7 @@ initialize the ``pseudo_label_list`` parameter specifying the list of
possible pseudo-labels, and override the ``logic_forward`` function
defining how to perform (deductive) reasoning.

-.. code:: ipython3
+.. code:: python

class AddKB(KBBase):
def __init__(self, pseudo_label_list=list(range(10))):
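
The rest of the class sits outside this hunk; conceptually, ``logic_forward`` for MNIST Addition just sums the two digits. A minimal sketch:

.. code:: python

    # Minimal sketch of the deductive rule for MNIST Addition
    from ablkit.reasoning import KBBase

    class TinyAddKB(KBBase):
        def __init__(self, pseudo_label_list=list(range(10))):
            super().__init__(pseudo_label_list)

        def logic_forward(self, nums):
            return sum(nums)  # e.g. [1, 2] -> 3
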
@@ -258,7 +258,7 @@ reasoning and abductive reasoning). Below is an example of performing
(deductive) reasoning, and users can refer to :ref:`Performing abductive
reasoning in the knowledge base <kb-abd>` for details of abductive reasoning.

-.. code:: ipython3
+.. code:: python

pseudo_labels = [1, 2]
reasoning_result = kb.logic_forward(pseudo_labels)
@@ -288,7 +288,7 @@ can minimize inconsistencies between the knowledge base and
pseudo-labels predicted by the learning part, and then return only one
candidate that has the highest consistency.

-.. code:: ipython3
+.. code:: python

reasoner = Reasoner(kb)
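
To make "return the candidate with the highest consistency" concrete, here is a tiny self-contained illustration of the idea for MNIST Addition; it is plain Python, independent of the ABL Kit API, and the probability vectors are fabricated for the demo:

.. code:: python

    # Conceptual illustration only: among digit pairs consistent with the
    # reasoning result Y, pick the one closest to the model's predictions
    from itertools import product

    def abduce_pair(pred_prob, y):
        candidates = [c for c in product(range(10), repeat=2) if sum(c) == y]
        # score each candidate by the probability the model assigns to it
        return max(candidates, key=lambda c: pred_prob[0][c[0]] * pred_prob[1][c[1]])

    # two fake probability vectors over digits 0-9 (assumed for the demo)
    p = [[0.05] * 10, [0.05] * 10]
    p[0][7], p[1][6] = 0.55, 0.55    # model leans toward 7 and 6
    print(abduce_pair(p, 13))        # -> (7, 6)
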

@@ -316,7 +316,7 @@ used to evaluate the accuracy of the machine learning model’s
predictions and the accuracy of the final reasoning results,
respectively.

-.. code:: ipython3
+.. code:: python

metric_list = [SymbolAccuracy(prefix="mnist_add"), ReasoningMetric(kb=kb, prefix="mnist_add")]

@@ -326,14 +326,14 @@ Bridging Learning and Reasoning
Now, the last step is to bridge the learning and reasoning part. We
proceed with this step by creating an instance of ``SimpleBridge``.

-.. code:: ipython3
+.. code:: python

bridge = SimpleBridge(model, reasoner, metric_list)

Perform training and testing by invoking the ``train`` and ``test``
methods of ``SimpleBridge``.

-.. code:: ipython3
+.. code:: python

# Build logger
print_log("Abductive Learning on the MNIST Addition example.", logger="current")


docs/Examples/Zoo.rst (+12, -12)

@@ -20,7 +20,7 @@ this happens, abductive reasoning can be employed to adjust these
results and retrain the model accordingly. This process enables us to
further update the learning model.

-.. code:: ipython3
+.. code:: python

# Import necessary libraries and modules
import os.path as osp
@@ -44,7 +44,7 @@ First, we load and preprocess the `Zoo
dataset <https://archive.ics.uci.edu/dataset/111/zoo>`__, and split it
into labeled/unlabeled/test data

-.. code:: ipython3
+.. code:: python

X, y = load_and_preprocess_dataset(dataset_id=62)
X_label, y_label, X_unlabel, y_unlabel, X_test, y_test = split_dataset(X, y, test_size=0.3)
@@ -55,7 +55,7 @@ the target is an integer value in the range [0,6] representing 7 classes
(i.e., mammal, bird, reptile, fish, amphibian, insect, and other). Below
is an illustration:

-.. code:: ipython3
+.. code:: python

print("Shape of X and y:", X.shape, y.shape)
print("First five elements of X:")
@@ -89,7 +89,7 @@ we treat the attributes as X and the targets as gt_pseudo_label (ground
truth pseudo-labels). Y (reasoning results) are expected to be 0,
indicating no rules are violated.

-.. code:: ipython3
+.. code:: python

label_data = tab_data_to_tuple(X_label, y_label, reasoning_result = 0)
data = tab_data_to_tuple(X_test, y_test, reasoning_result = 0)
@@ -103,7 +103,7 @@ base model. We use a `Random
Forest <https://en.wikipedia.org/wiki/Random_forest>`__ as the base
model.

-.. code:: ipython3
+.. code:: python

base_model = RandomForestClassifier()

@@ -112,7 +112,7 @@ cannot directly deal with example-level data. Therefore, we wrap the
base model into ``ABLModel``, which enables the learning part to train,
test, and predict on example-level data.

-.. code:: ipython3
+.. code:: python

model = ABLModel(base_model)

@@ -125,7 +125,7 @@ information about the relations between attributes (X) and targets
base is built in the ``ZooKB`` class within the file ``examples/zoo/kb.py``, and is
derived from the ``KBBase`` class.

-.. code:: ipython3
+.. code:: python

kb = ZooKB()

@@ -133,7 +133,7 @@ As mentioned, for all attributes and targets in the dataset, the
reasoning results are expected to be 0 since there should be no
violations of the established knowledge in real data. As shown below:

-.. code:: ipython3
+.. code:: python

for idx, (x, y_item) in enumerate(zip(X[:5], y[:5])):
print(f"Example {idx}: the attributes are: {x}, and the target is {y_item}.")
@@ -173,7 +173,7 @@ can minimize inconsistencies between the knowledge base and
pseudo-labels predicted by the learning part, and then return only one
candidate that has the highest consistency.

-.. code:: ipython3
+.. code:: python

def consistency(data_example, candidates, candidate_idxs, reasoning_results):
pred_prob = data_example.pred_prob
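
The function body is truncated by the diff. One plausible shape for such a consistency measure scores each candidate by the probability the model assigns to its labels; the following is a sketch under assumed data layouts, not the repository's exact code:

.. code:: python

    import numpy as np

    # Sketch under assumed shapes: pred_prob[i] holds the predicted class
    # probabilities of the i-th instance, and a candidate is a list of labels
    def candidate_score(pred_prob, candidate):
        return float(np.mean([pred_prob[i][label]
                              for i, label in enumerate(candidate)]))
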
@@ -194,7 +194,7 @@ are used to evaluate the accuracy of the machine learning model’s
predictions and the accuracy of the final reasoning results,
respectively.

-.. code:: ipython3
+.. code:: python

metric_list = [SymbolAccuracy(prefix="zoo"), ReasoningMetric(kb=kb, prefix="zoo")]

@@ -204,14 +204,14 @@ Bridging Learning and Reasoning
Now, the last step is to bridge the learning and reasoning part. We
proceed with this step by creating an instance of ``SimpleBridge``.

-.. code:: ipython3
+.. code:: python

bridge = SimpleBridge(model, reasoner, metric_list)

Perform training and testing by invoking the ``train`` and ``test``
methods of ``SimpleBridge``.

-.. code:: ipython3
+.. code:: python

# Build logger
print_log("Abductive Learning on the Zoo example.", logger="current")

