diff --git a/abl/reasoning/readme.md b/abl/reasoning/readme.md
deleted file mode 100644
index 3b94d9e..0000000
--- a/abl/reasoning/readme.md
+++ /dev/null

# `kb.py`

The methods implemented in `kb.py` can be used to build a KB (knowledge base).

## Building a KB

When building a KB, the user only needs to specify two things:

1. `pseudo_label_list`: the list of possible pseudo labels (the output of the machine learning part / the input of the logical reasoning part).

   > For example, the `pseudo_label_list` passed in MNIST_add is [0,1,2,...,9].

2. `logic_forward`: how the output of the logical reasoning part is derived from its input.
   > For example, `logic_forward` in MNIST_add is: "add the two pseudo labels to obtain the result."

After that, the other functions of the KB (such as abduction) are built automatically.

### Code Implementation

In a Python program, a KB is built by creating a subclass of `KBBase`.

> For example, the KB of MNIST_add (`kb1`) is built as:
>
> ```python
> class add_KB(KBBase):
>     def __init__(self, pseudo_label_list=list(range(10))):
>         super().__init__(pseudo_label_list)
>
>     def logic_forward(self, pseudo_labels):
>         return sum(pseudo_labels)
>
> kb1 = add_KB()
> ```

## GKB

When building a KB, the user can set `GKB_flag` in `__init__` to indicate whether a GKB (Ground Knowledge Base) should be built. The GKB is a Python dictionary: each key is a possible result obtained by feeding a list of pseudo labels into `logic_forward`, and the value of each key is the list of all pseudo-label lists that produce that result. Once built, the GKB speeds up abduction.

### Building the GKB

When `GKB_flag` is `True`, the user also needs to specify `len_list`, which gives the lengths of the pseudo-label lists enumerated in the GKB. While `__init__` builds the KB, the GKB is built automatically from `pseudo_label_list`, `len_list`, and `logic_forward`.

> For example, the `len_list` passed in MNIST_add is [2], and the GKB built is {0:[[0,0]], 1:[[1,0],[0,1]], 2:[[0,2],[1,1],[2,0]], ..., 18:[[9,9]]}.

## Abduction

The abduction function of the KB is implemented automatically in `abduce_candidates`. The following parameters are passed when calling `abduce_candidates`:

- `pred_res`: the pseudo labels output by the machine learning part
- `y`: the correct result of the logical reasoning part
- `max_revision_num`: the maximum number of pseudo labels that may be revised
- `require_more_revision`: whether, after an abduction result has been found, the number of revised pseudo labels should keep being increased to obtain more abduction results.

The output is all possible abduction results.

> For example, calling `kb1.abduce_candidates` on the `kb1` defined above (the KB of MNIST_add) gives:
>
> |`pred_res`|`y`|`max_revision_num`|`require_more_revision`|Output|
> |:---:|:---:|:---:|:---:|:----|
> |[1,1]|8|2|1|[[1,7],[7,1],[2,6],[6,2],[3,5],[5,3],[4,4]]|
> |[1,1]|8|2|0|[[1,7],[7,1]]|
> |[1,1]|8|1|1|[[1,7],[7,1]]|
> |[1,1]|17|1|1|[]|
> |[1,1]|17|2|0|[[8,9],[9,8]]|
> |[1,1]|20|2|0|[]|

### Implementation of Abduction

When `GKB_flag` is `True`, abduction is performed by `_abduce_by_GKB`; otherwise it is performed by `_abduce_by_search`.

#### `_abduce_by_GKB`

Search the GKB for abduction results whose key is `y` and which satisfy the constraints given by `pred_res`, `max_revision_num`, and `require_more_revision`.

> For example, in MNIST_add, when `y` is 4, [1,3], [3,1], and [2,2] can be found in the GKB. If `pred_res` is [2,8], `max_revision_num` is 2, and `require_more_revision` is 0, the output is [2,2].

#### `_abduce_by_search`

Starting from 0, keep increasing the number of revised pseudo labels and enumerate all possible revised pseudo labels, until `max_revision_num` is reached or a result consistent with the logic defined by `logic_forward` is found. Then, if `require_more_revision` is not 0, continue to increase the number of revised pseudo labels and output all consistent results together.

> For example, in MNIST_add, when `y` is 4, `pred_res` is [2,8], and `max_revision_num` is 2: revising 0 pseudo labels cannot give the correct result. Revising 1 pseudo label gives the possible logical inputs [2,0],[2,1],[2,2],[2,3],[2,4],[2,5],[2,6],[2,7],[2,9],[0,8],[1,8],[3,8],[4,8],[5,8],[6,8],[7,8],[8,8],[9,8]; among them, [2,2] is consistent with the logic. If `require_more_revision` is 0, the final output is [2,2]; otherwise, the number of revised pseudo labels is increased further and the candidates are checked again.

_Note: if a prolog program is used as the KB, `_abduce_by_search` does not manually enumerate all possible revised pseudo labels; pruning and other optimizations performed while the prolog program runs speed up abduction. This is detailed in the next subsection, `prolog_KB`._

_Note: when the indices to be revised have already been obtained by `zoopt` or some other means, the whole `_abduce_by_search` procedure is not needed; calling `revise_by_idx` is enough. This is detailed in `abducer_base.py`._

## `prolog_KB`

`prolog_KB` is a subclass of `KBBase`. When the logic is provided as a prolog program, a KB can be built by passing `pseudo_label_list` and `pl_file` directly to the `prolog_KB` class.

_Note: the prolog program passed in must contain an implementation of `logic_forward`, and the variable holding the result of `logic_forward` must be named `Res`._

> For example, MNIST_add can first write the `add.pl` file:
>
> ```prolog
> pseudo_label(N) :- between(0, 9, N).
> logic_forward([Z1, Z2], Res) :- pseudo_label(Z1), pseudo_label(Z2), Res is Z1+Z2.
> ```
>
> Then build the corresponding KB:
>
> ```python
> kb2 = prolog_KB(pseudo_label_list=list(range(10)),
>                 pl_file='add.pl')
> ```

As before, the other functions of the KB (such as abduction) are built automatically.
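This README does not describe how `prolog_KB` consults the program internally. Purely as an illustration of what the `add.pl` rules express, the same file can be queried by hand with SWI-Prolog through the `pyswip` package (an assumption: SWI-Prolog and `pyswip` are installed; the class itself may use a different mechanism):

```python
# Illustrative only: query add.pl directly; prolog_KB may consult it differently.
from pyswip import Prolog

prolog = Prolog()
prolog.consult("add.pl")

# Forward direction: which result do the pseudo labels [3, 5] produce?
print(list(prolog.query("logic_forward([3,5], Res)")))   # [{'Res': 8}]

# Abductive direction: which pseudo-label pairs produce 8?
print(list(prolog.query("logic_forward([Z1,Z2], 8)")))   # [{'Z1': 0, 'Z2': 8}, ..., {'Z1': 8, 'Z2': 0}]
```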
## Other Optional Parameters

The following parameters can also be passed when building a KB:

- `max_err`
- `use_cache`

### `max_err`

When the output of the logical reasoning part is numerical, `max_err` can be passed in so that, when `abduce_candidates` is called, every result whose error with respect to `y` is within `max_err` is output.

> For example, for the `kb1` defined above (the KB of MNIST_add), when `pred_res` is [2,2], `y` is 7, `max_revision_num` is 2, and `require_more_revision` is 0: if `max_err` is 0, the output is [[2,5],[5,2]]; if `max_err` is 1, the output is [[2,4],[2,5],[2,6],[4,2],[5,2],[6,2]].

### `use_cache`

When `use_cache` is `True`, the result of each call to `abduce_candidates` is cached so that it can be returned directly the next time it is called.

## Learning Logic Rules

See `HED_prolog_KB` for an example.

# `abducer_base.py`

The methods implemented in `abducer_base.py` help perform abduction. To use them, instantiate the `AbducerBase` class and pass a `kb` to it.

> For example, in MNIST_add, after defining `kb1`, continue with
>
> ```python
> abd1 = AbducerBase(kb1)
> ```

The main function implemented by `AbducerBase` is `abduce`. Given the data, it returns **one** **most likely** abduction result. The following parameters are passed when calling `abduce`:

- `data`: a tuple of three elements, `pred_res`, `pred_res_prob`, and `key`. Here `pred_res` is defined as in `abduce_candidates` in `kb.py`, `key` plays the role of `y` there, and `pred_res_prob` is the list of confidences output by the machine learning part for each pseudo label.
- `max_revision`: the maximum number of pseudo labels that may be revised, given as a float or an int. A float is interpreted as the maximum proportion of pseudo labels that may be revised; an int is the maximum number of revised pseudo labels (in which case it has the same meaning as `max_revision_num` in `abduce_candidates` in `kb.py`).
- `require_more_revision`: the same definition as the parameter of the same name in `abduce_candidates` in `kb.py`.

The output is one abduction result.

## Implementation of `abduce`

When instantiating `AbducerBase`, the `zoopt` parameter decides whether zeroth-order optimization is used during abduction to find the indices of the pseudo labels to revise.

- When `zoopt` is `False`, zeroth-order optimization is not used: `abduce_candidates` in `kb.py` finds all possible abduction results directly, and `_get_one_candidate` then picks the most likely one.

- When `zoopt` is `True`, zeroth-order optimization finds the indices of the pseudo labels to revise, `revise_by_idx` in `kb.py` then finds the abduction results, and finally `_get_one_candidate` picks the most likely one.
  > For example, in MNIST_add, when `pred_res` is [2,9] and `key` is 18, zeroth-order optimization first finds that the index to revise is 0. Passing `pred_res`, `key`, and the revised index ([0]) to `revise_by_idx` in `kb.py` yields all possible logical inputs after revising the pseudo label at that index: [0,9],[1,9],[3,9],[4,9],[5,9],[6,9],[7,9],[8,9],[9,9]; among them, [9,9] is consistent with the logic, so [9,9] is output.
  >
  > As another example, in HED, `pred_res` is [1,0,1,'=',1] (`key` defaults to `None`). Zeroth-order optimization first finds that the index to revise is 1; passing this to `revise_by_idx` in `kb.py` yields all possible logical inputs after revising the pseudo label at that index: [1,1,1,'=',1],[1,'+',1,'=',1],[1,'=',1,'=',1]; among them, [1,'+',1,'=',1] is consistent with the logic, so [1,'+',1,'=',1] is output.

_Note: zeroth-order optimization also supports the `max_revision` constraint, but `require_more_revision` is not supported there yet._

## What "Most Likely" Means

When instantiating `AbducerBase`, the `dist_func` parameter determines how the most likely output is chosen from the returned abduction results. `dist_func` can be:

- `hamming`: use the Hamming distance between each abduction result and `pred_res` as the metric, and output the result with the smallest distance.
- `confidence`: use the distance between each abduction result and `pred_res_prob` as the metric, and output the result with the smallest distance.

> For example, in MNIST_add, when `pred_res` is [1,1], `key` is 8, and `max_revision` is 1: if `pred_res_prob` is [[0, 0.99, 0.01, 0, 0, 0, 0, 0, 0, 0], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]], the output is [1,7]; if `pred_res_prob` is [[0, 0, 0.01, 0, 0, 0, 0, 0.99, 0, 0], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]], the output is [7,1].

## Batch Abduction

`batch_abduce` can be used to perform abduction on a batch of data at once. For example, with the `abd1` defined above, calling `abd1.batch_abduce({'cls':[[1,1], [1,2]], 'prob':multiple_prob}, [4,8], max_revision=2, require_more_revision=0)` returns [[1,3], [6,2]].
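Putting `kb.py` and `abducer_base.py` together, a minimal end-to-end sketch for MNIST_add might look as follows. It is only a sketch under the names used in this README; the exact signatures, and whether `dist_func` is passed as a string, are assumptions.

```python
# Sketch only: names follow this README, not necessarily the actual code.
kb1 = add_KB()
abd1 = AbducerBase(kb1, dist_func='confidence')   # passing 'confidence' as a string is an assumption

pred_res = [1, 1]                          # pseudo labels predicted by the learner
pred_res_prob = [
    [0, 0.99, 0.01, 0, 0, 0, 0, 0, 0, 0],  # very confident the first digit is 1
    [0.1] * 10,                             # uncertain about the second digit
]
key = 8                                     # the known result of the addition

result = abd1.abduce((pred_res, pred_res_prob, key),
                     max_revision=1, require_more_revision=0)
# Following the confidence example above, this should give [1, 7].
```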
diff --git a/abl/reasoning/readme_en.md b/abl/reasoning/readme_en.md
deleted file mode 100644
index f6fdf38..0000000
--- a/abl/reasoning/readme_en.md
+++ /dev/null

# `kb.py`

You can use the methods implemented in `kb.py` to construct a KB (knowledge base).

## KB Construction

When constructing a KB, users only need to specify two items:

1. `pseudo_label_list`: the list of possible pseudo labels, i.e., the output of the machine learning part and the input of the logical reasoning part.

   > For example, the `pseudo_label_list` passed in MNIST_add is [0,1,2,...,9].

2. `logic_forward`: how to obtain the output from the input of the logical reasoning part.
   > For example, `logic_forward` in MNIST_add is: "Add the two pseudo labels to get the result."

After that, the other functions of the KB (such as abduction) will be constructed automatically.

### Code Implementation

In a Python program, you can build a KB by creating a subclass of `KBBase`.

> For example, the KB of MNIST_add (`kb1`) is constructed as:
>
> ```python
> class add_KB(KBBase):
>     def __init__(self, pseudo_label_list=list(range(10))):
>         super().__init__(pseudo_label_list)
>
>     def logic_forward(self, pseudo_labels):
>         return sum(pseudo_labels)
>
> kb1 = add_KB()
> ```

## GKB

When building a KB, users can set `GKB_flag` in `__init__` to indicate whether to build a GKB (Ground Knowledge Base). The GKB is a Python dictionary: each key is a possible result obtained by feeding a list of pseudo labels into `logic_forward`, and the value of each key is the list of all pseudo-label lists that produce that result. Once built, the GKB speeds up abduction.

### GKB Construction

When `GKB_flag` is `True`, users also need to specify `len_list`, which indicates the lengths of the pseudo-label lists enumerated in the GKB. While `__init__` constructs the KB, the GKB is constructed automatically from `pseudo_label_list`, `len_list`, and `logic_forward`.

> For example, the `len_list` passed in MNIST_add is [2], and the GKB constructed is {0:[[0,0]], 1:[[1,0],[0,1]], 2:[[0,2],[1,1],[2,0]], ..., 18:[[9,9]]}.

## Abduction

The abduction function of the KB is implemented automatically in `abduce_candidates`. The following parameters need to be passed when calling `abduce_candidates`:

- `pred_res`: the pseudo labels output by machine learning
- `key`: the correct result of logical reasoning
- `max_address_num`: the maximum number of pseudo labels that may be modified
- `require_more_address`: whether, after an abduction result has been found, the number of modified pseudo labels should keep being increased to obtain more abduction results.

The output is all possible abduction results.

> For example, calling `kb1.abduce_candidates` on `kb1` (the KB of MNIST_add) gives:
>
> |`pred_res`|`key`|`max_address_num`|`require_more_address`|Output|
> |:---:|:---:|:---:|:---:|:----|
> |[1,1]|8|2|0|[[1,7],[7,1]]|
> |[1,1]|8|2|1|[[1,7],[7,1],[2,6],[6,2],[3,5],[5,3],[4,4]]|
> |[1,1]|8|1|1|[[1,7],[7,1]]|
> |[1,1]|17|1|0|[]|
> |[1,1]|17|1|1|[[8,9],[9,8]]|
> |[1,1]|17|2|0|[[8,9],[9,8]]|
> |[1,1]|20|2|0|[]|

### Implementation of Abduction

If `GKB_flag` is `True`, `_abduce_by_GKB` is called to perform abduction; otherwise, `_abduce_by_search` is called.

#### `_abduce_by_GKB`

Search the GKB for abduction results whose key is `key` and which satisfy the constraints given by `pred_res`, `max_address_num`, and `require_more_address`.

> For example, in MNIST_add, when `key` is 4, [1,3], [3,1], and [2,2] can be found in the GKB. If `pred_res` is [2,8], `max_address_num` is 2, and `require_more_address` is 0, the output is [2,2].
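To make the lookup above concrete, here is a hedged sketch of the kind of filtering `_abduce_by_GKB` performs. It is illustrative only: the real method also handles `require_more_address`, `max_err`, and caching.

```python
def abduce_from_gkb(gkb, pred_res, key, max_address_num):
    """Return the GKB candidates for `key` reachable with the fewest modifications."""
    candidates = []
    for candidate in gkb.get(key, []):
        # How many pseudo labels in pred_res would have to change.
        num_modified = sum(p != c for p, c in zip(pred_res, candidate))
        if num_modified <= max_address_num:
            candidates.append((num_modified, candidate))
    if not candidates:
        return []
    fewest = min(n for n, _ in candidates)
    return [c for n, c in candidates if n == fewest]

# The candidates mentioned in the example above for key 4.
gkb = {4: [[1, 3], [3, 1], [2, 2]]}
print(abduce_from_gkb(gkb, [2, 8], key=4, max_address_num=2))   # [[2, 2]]
```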
#### `_abduce_by_search`

Starting from 0, continuously increase the number of modified pseudo labels and enumerate all possible modified pseudo labels, until `max_address_num` is reached or a result consistent with the logic defined by `logic_forward` is found. Then, if `require_more_address` is not 0, continue to increase the number of modified pseudo labels and output all matching results together.

> For example, in MNIST_add, when `key` is 4, `pred_res` is [2,8], and `max_address_num` is 2: modifying 0 pseudo labels cannot give the correct result. Modifying 1 pseudo label gives the possible logical inputs [2,0],[2,1],[2,2],[2,3],[2,4],[2,5],[2,6],[2,7],[2,9],[0,8],[1,8],[3,8],[4,8],[5,8],[6,8],[7,8],[8,8],[9,8]; among them, [2,2] is consistent with the logic. If `require_more_address` is 0, the final output is [2,2]; otherwise, the number of modified pseudo labels is increased further and the candidates are checked again.

_Note: When the indices of the pseudo labels to be modified have already been obtained using `zoopt` or other methods, it is not necessary to run the entire `_abduce_by_search` procedure; calling `address_by_idx` is enough. This part is described in detail in `abducer_base.py`._
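For intuition, the enumeration described above can be sketched as follows. This is a simplified illustration, not the actual `_abduce_by_search`: it ignores `require_more_address`, `max_err`, and the pruning available with prolog KBs.

```python
from itertools import combinations, product

def search_revisions(pred_res, key, max_address_num, pseudo_label_list, logic_forward):
    """Enumerate modified versions of pred_res, fewest modifications first."""
    for num_modified in range(max_address_num + 1):
        found = []
        for idxs in combinations(range(len(pred_res)), num_modified):
            for new_labels in product(pseudo_label_list, repeat=num_modified):
                candidate = list(pred_res)
                for i, label in zip(idxs, new_labels):
                    candidate[i] = label
                if logic_forward(candidate) == key and candidate not in found:
                    found.append(candidate)
        if found:   # stop at the smallest number of modifications that works
            return found
    return []

# Reproduces the example above: key 4, pred_res [2,8], at most 2 modifications.
print(search_revisions([2, 8], 4, 2, list(range(10)), sum))   # [[2, 2]]
```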
## `prolog_KB`

`prolog_KB` is a subclass of `KBBase`. When the logic is provided in the form of a prolog program, a KB can be constructed by passing `pseudo_label_list` and `pl_file` directly to the `prolog_KB` class.

_Note: The prolog program passed in needs to contain an implementation of `logic_forward`, and the variable holding the result of `logic_forward` must be named `Res`._

> For example, MNIST_add can first write the `add.pl` file:
>
> ```prolog
> pseudo_label(N) :- between(0, 9, N).
> logic_forward([Z1, Z2], Res) :- pseudo_label(Z1), pseudo_label(Z2), Res is Z1+Z2.
> ```
>
> Then the corresponding KB can be constructed:
>
> ```python
> kb2 = prolog_KB(pseudo_label_list=list(range(10)),
>                 pl_file='add.pl')
> ```

Similarly, the other functions of the KB (such as abduction) will be constructed automatically.

## Other Optional Parameters

The following parameters can also be passed in when building a KB:

- `max_err`
- `use_cache`

### `max_err`

When the output of the logical reasoning part is numerical, `max_err` can be passed in so that, when `abduce_candidates` is called, every result whose error with respect to `key` is within `max_err` is output.

> For example, in MNIST_add, when `pred_res` is [2,2], `key` is 7, `max_address_num` is 2, and `require_more_address` is 0: if `max_err` is 0, the output is [[2,5],[5,2]]; if `max_err` is 1, the output is [[2,4],[2,5],[2,6],[4,2],[5,2],[6,2]].

### `use_cache`

When `use_cache` is `True`, the result of each call to `abduce_candidates` will be cached so that it can be returned directly the next time it is called.

# `abducer_base.py`

You can use the methods implemented in `abducer_base.py` to help with abduction. To do so, instantiate the `AbducerBase` class and pass a `kb` to it.

> For example, in MNIST_add, after defining `kb1`, continue with
>
> ```python
> abd1 = AbducerBase(kb1)
> ```

The main function implemented by `AbducerBase` is `abduce`. Given the data, it returns **one** **most likely** abduction result. The following parameters need to be passed when calling `abduce`:

- `data`: a tuple of three elements, `pred_res`, `pred_res_prob`, and `key`. Here `pred_res` and `key` are defined as in `abduce_candidates` in `kb.py`, and `pred_res_prob` is the list of confidences output by machine learning for each pseudo label.
- `max_address`: the maximum number of pseudo labels that may be modified, given as a float or an int. A float is interpreted as the maximum proportion of pseudo labels that may be modified; an int is the maximum number of modified pseudo labels (the same as `max_address_num` in `abduce_candidates` in `kb.py`).
- `require_more_address`: the same definition as in `abduce_candidates` in `kb.py`.

The output is one abduction result.

## Implementation of `abduce`

When instantiating `AbducerBase`, `zoopt` can be passed in to decide whether zeroth-order optimization is used during abduction to find the indices of the pseudo labels to modify.

- When `zoopt` is `False`, zeroth-order optimization is not used: `abduce_candidates` in `kb.py` finds all possible abduction results directly, and `_get_one_candidate` then picks the most likely one.
- When `zoopt` is `True`, zeroth-order optimization finds the indices of the pseudo labels to modify, `address_by_idx` in `kb.py` then finds the abduction results, and finally `_get_one_candidate` picks the most likely one.
  > For example, in MNIST_add, when `pred_res` is [2,9] and `key` is 18, zeroth-order optimization first finds that the index to modify is 0. Substituting `pred_res`, `key`, and the modified index ([0]) into `address_by_idx` in `kb.py` yields all possible logical inputs after modifying the pseudo label at that index: [0,9],[1,9],[3,9],[4,9],[5,9],[6,9],[7,9],[8,9],[9,9]; among them, [9,9] is consistent with the logic, so the output is [9,9].
  >
  > Another example is HED, where `pred_res` is [1,0,1,'=',1] (`key` is set to `None` by default). Zeroth-order optimization first finds that the index to modify is 1; substituting it into `address_by_idx` in `kb.py` yields all possible logical inputs after modifying the pseudo label at that index: [1,1,1,'=',1],[1,'+',1,'=',1],[1,'=',1,'=',1]; among them, [1,'+',1,'=',1] is consistent with the logic, so the output is [1,'+',1,'=',1].

## What is "most likely"

When instantiating `AbducerBase`, `dist_func` can be passed in to indicate how to choose the most likely output when abduction returns multiple results. `dist_func` can be:

- `hamming`: use the Hamming distance between each abduction result and `pred_res` as the metric, and output the result with the minimum distance.
- `confidence`: use the distance between each abduction result and `pred_res_prob` as the metric, and output the result with the minimum distance.

> For example, in MNIST_add, when `pred_res` is [1,1], `key` is 8, and `max_address` is 1: if `pred_res_prob` is [[0, 0.99, 0.01, 0, 0, 0, 0, 0, 0, 0], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]], the output is [1,7]; if `pred_res_prob` is [[0, 0, 0.01, 0, 0, 0, 0, 0.99, 0, 0], [0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]], the output is [7,1].

## Batch Abduction

`batch_abduce` can be used to perform abduction on a batch of data at once. For example, calling `abd1.batch_abduce({'cls':[[1,1], [1,2]], 'prob':multiple_prob}, [4,8], max_address=2, require_more_address=0)` returns [[1,3], [6,2]].
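As a closing illustration of the `confidence` option described in "What is 'most likely'": one simple way to score a candidate is the product of the learner's predicted probabilities for its labels. This is an assumption about the metric (the code may use a different but equivalent formulation); under it, the selection in the example above is reproduced:

```python
def confidence_score(candidate, pred_res_prob):
    """Product of the predicted probabilities of the candidate's pseudo labels (assumed metric)."""
    score = 1.0
    for label, probs in zip(candidate, pred_res_prob):
        score *= probs[label]
    return score

# Candidates for pred_res [1,1] with key 8 and at most one modification.
candidates = [[1, 7], [7, 1]]
pred_res_prob = [[0, 0.99, 0.01, 0, 0, 0, 0, 0, 0, 0],
                 [0.1] * 10]
best = max(candidates, key=lambda c: confidence_score(c, pred_res_prob))
print(best)   # [1, 7], matching the example above
```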