# D3MDS
D3M Data Supply (D3MDS) refers to the data infrastructure provided by MIT Lincoln Laboratory for the D3M program. The basic components of D3MDS include the following: Datasets, Problems, and Baseline solutions. The guiding principles for this design are as follows:
1. Keep the three entities, Datasets, Problems, and Baseline solutions, decoupled.
2. Keep a dataset self-contained.
3. Provide a uniform and consistent way of dealing with datasets, which could be full datasets, a train view of a dataset (during blind evaluation), or a test view of a dataset (during blind evaluation).
4. Allow the possibility of annotating a dataset with metadata information.
5. Ensure the dataset schema can handle multiple relational tables.
6. Keep Datasets and Problems decoupled.
## Dataset
One of the core components of D3MDS is datasets. Each dataset is a self-contained set of data resources. These data resources can come in many types and formats. Some of the types of data resources one can expect to see in D3MDS include image, video, audio, speech, text, table, graph, timeseries, etc. Each type can be supplied in one or more formats. For example, image resources can come in PNG, JPEG, etc. formats. Our file structure convention for organizing a dataset is as follows:
```
. <dataset_id>/
|-- media/
|-- text/
|-- tables/
|-- graphs/
|-- datasetDoc.json
```
Convention: The name of the root directory of a dataset is its dataset_id.
Suggested sub-directory names and structure:
| Entry | Description |
|-----------------|-------------------------------------------------------------------------------------------------------------------|
| <dataset_id>/ | the root directory |
| media/ | (optional) directory containing media files, if any (e.g., images, video, audio, etc.) |
| text/ | (optional) directory containing text documents, if any |
| tables/ | (required) directory containing tabular data; not optional, since a dataset will contain at least one table |
| graphs/ | (optional) directory containing graphs, if any |
| datasetDoc.json | (required) JSON document that describes all the data resources in the dataset (an instance of the dataset schema) |
The datasetDoc.json file provides a description of all the elements of a dataset as a JSON document. This document is specified according to a predefined dataset schema. In other words, datasetDoc is an instance of datasetSchema.
A small sample Dataset is shown in the figure below.
![A sample dataset](static/sampleDataset.PNG)
__Special "learningData" file__: All datasets have a main data file that serves as the entry point into the dataset. Its resource name is always 'learningData', and for backwards compatibility the file itself is always named 'learningData'. Its format is typically CSV (learningData.csv), though not necessarily. This file is treated as just another tabular resource and is placed in the tables/ directory as shown below. Its columns and format are annotated in datasetDoc.json like those of any other table in the dataset. An example of a learningData file can be seen in the sample dataset figure above.
```
. <dataset_id>/
|-- tables/
|   |-- learningData.csv
|-- datasetDoc.json
```
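As a sketch (not part of the specification), a consumer could use datasetDoc.json to locate and load this entry point. The JSON field names used below (`dataResources`, `resID`, `resPath`) are assumptions about the dataset schema, which is documented separately in datasetSchema.md:

```python
# Minimal sketch: locate and load a dataset's entry point via its datasetDoc.json.
# The field names "dataResources", "resID", and "resPath" are assumptions about
# the dataset schema, not guaranteed by this overview.
import csv
import json
import os

def load_learning_data(dataset_root):
    """Return (datasetDoc, learningData rows) for the dataset at dataset_root."""
    with open(os.path.join(dataset_root, "datasetDoc.json")) as f:
        doc = json.load(f)
    # Find the resource named 'learningData' -- the dataset entry point.
    entry = next(r for r in doc["dataResources"] if r["resID"] == "learningData")
    with open(os.path.join(dataset_root, entry["resPath"])) as f:
        rows = list(csv.DictReader(f))
    return doc, rows
```

The learningData table is loaded the same way as any other tabular resource; only its fixed resource name distinguishes it.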
### datasetSchema
* The datasetSchema is version controlled and can be found in [datasetSchema.json](../schemas/datasetSchema.json)
* Full documentation for datasetSchema can be found in [datasetSchema.md](datasetSchema.md)
## Problems
A dataset alone does not constitute a data science problem. A problem is developed over a dataset by defining a task, inputs for the task (which include a dataset), and expected outputs for the task. Multiple problems can be developed for a single dataset. Our convention for organizing a problem is as follows:
```
. <problem_id>/
|-- dataSplits.csv
|-- problemDoc.json
```
Convention: The name of the root directory of a problem is its problem_id.
| Entry | Description |
|-----------------|--------------------------------------------------------------------------------------------------------------------------------|
| <problem_id>/ | the root directory |
| dataSplits.csv | (optional) data splits file that specifies the train/test split of the data when the split is defined manually |
| problemDoc.json | (required) JSON document that describes the task, inputs, and expected output of the problem (an instance of the problem schema) |
A small sample Problem is shown in the figure below.
![A sample problem](static/sampleProblem.PNG)
__Special "dataSplits" file__: When the evaluation split has been defined manually rather than algorithmically, the dataSplits file records which rows in learningData.csv are 'TRAIN' rows and which are 'TEST' rows. Normally (outside the context of the blind evaluation), performer systems can use this file to infer the trainData and testData splits. In the context of the blind evaluation, this file is used to create two separate views of the dataset, the train view and the test view, as described in the Dataset Views section below. An example of a dataSplits file can be seen in the sample problem figure above.
The dataSplits file also contains the repeat number and fold number if multiple repeats or folds are used in the problem definition. Some training datasets have different evaluation procedures (e.g., 10-fold CV, 3 repeats of 20% holdout, etc.); including repeat and fold information in dataSplits handles all of these cases. __However, during blind evaluation, only the holdout method is used, without repeats. Therefore, for evaluation problems, repeat and fold will always contain 0 values.__
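The split-recovery step described above can be sketched as follows; the dataSplits.csv column names used here (`d3mIndex`, `type`, `repeat`, `fold`) are assumptions consistent with the description above, not taken from the problem schema:

```python
# Sketch: recover train/test partitions of learningData rows from a manually
# defined dataSplits.csv. Column names (d3mIndex, type, repeat, fold) are
# assumptions based on the convention described in this document.
import csv

def split_rows(learning_rows, splits_path, repeat=0, fold=0):
    """Partition learningData rows into (train, test) using dataSplits.csv."""
    with open(splits_path) as f:
        split_of = {
            r["d3mIndex"]: r["type"]
            for r in csv.DictReader(f)
            # Keep only the requested repeat/fold (both 0 for evaluation problems).
            if int(r.get("repeat", 0)) == repeat and int(r.get("fold", 0)) == fold
        }
    train = [r for r in learning_rows if split_of.get(r["d3mIndex"]) == "TRAIN"]
    test = [r for r in learning_rows if split_of.get(r["d3mIndex"]) == "TEST"]
    return train, test
```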
### problemSchema
* The problemSchema is version controlled and can be found in [problemSchema.json](../schemas/problemSchema.json)
* Full documentation for problemSchema can be found in [problemSchema.md](problemSchema.md)
## Baseline solutions
A baseline solution is a solution to a problem. In its current conception, it consists of runnable code that takes a problem as input and produces an output (as specified in the problem schema). There can be multiple solutions for a given problem. Our convention for organizing a Python solution is as follows:
```
. <solution_id>/
|-- src/
|-- run.py
|-- predictions.csv
|-- scores.json
```
| Entry | Description |
|-----------------|--------------------------------------------------------------------------------------------------------------------------------|
| <solution_id>/ | the root directory |
| src/ | source code for the pipeline |
| run.py | runner script for running the solution |
| predictions.csv | a file containing the predictions of the solution for the given problem |
| scores.json | a file containing the performance scores of the solution |
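A minimal sketch of the predictions-writing step a run.py might perform. The exact predictions.csv layout assumed here (a `d3mIndex` column plus one column per target) is an illustrative assumption, not taken from the problem schema:

```python
# Hypothetical sketch of the final step of a run.py: write predictions.csv,
# one predicted target value per test row, keyed by d3mIndex. The file layout
# (d3mIndex column + target column) is an assumption for illustration.
import csv

def write_predictions(test_rows, target_name, predict, out_path):
    """Write predictions for test_rows to out_path as CSV."""
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["d3mIndex", target_name])
        for row in test_rows:
            # predict() is whatever model the solution's src/ pipeline built.
            w.writerow([row["d3mIndex"], predict(row)])
```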
# Dataset Views
For all the datasets that are released to the performers (e.g., training datasets), performers get a full view of the dataset as shown below:
![](static/sampleSupply.PNG)
However, for blind evaluation purposes, three separate views of the dataset are created. These are not just logical views, but separate physical copies of the dataset. For each dataset and problem combination, dataset_TRAIN, dataset_TEST, and dataset_SCORE variants are created; by convention they always have those suffixes.
The TEST view is the same as the SCORE view, except that the TEST view has all target values redacted.
![](static/allViews.PNG)
Only the pertinent data resources from the full dataset are replicated in the train and test views. For instance, the train view is created by making a copy of the full dataset and deleting the 'TEST' rows in learningData.csv, along with any other data resources referenced by those rows. The same is true for the train view of the problem.
![](static/trainView.PNG)
During blind evaluation, at test time, the test view is provided as shown below. Note that the labels in learningData.csv have been removed for the test view.
![](static/testView.PNG)
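The view construction described above can be sketched as follows, assuming rows are represented as dictionaries keyed by column name and that the target columns are known (in practice they would come from datasetDoc.json):

```python
# Sketch of materializing the three views: TRAIN keeps only TRAIN rows,
# SCORE keeps TEST rows with targets intact, and TEST is SCORE with every
# target value redacted. split_of maps d3mIndex -> 'TRAIN'/'TEST' (from
# dataSplits.csv); target_columns is assumed known from datasetDoc.json.
def make_views(rows, split_of, target_columns):
    train_view = [dict(r) for r in rows if split_of.get(r["d3mIndex"]) == "TRAIN"]
    score_view = [dict(r) for r in rows if split_of.get(r["d3mIndex"]) == "TEST"]
    test_view = []
    for r in score_view:
        redacted = dict(r)
        for col in target_columns:
            redacted[col] = ""  # TEST view: same rows as SCORE, targets removed
        test_view.append(redacted)
    return train_view, test_view, score_view
```

A full implementation would also drop any media, text, or graph resources referenced only by the deleted rows, as described above.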
