Atomspace is the hypergraph OpenCog Hyperon uses to represent and store knowledge. It is the source of knowledge for AI agents and the container for any computational result created or achieved during their execution.
The Distributed Atomspace (DAS) extends OpenCog Hyperon's Atomspace into a more independent component designed to support multiple simultaneous connections from different AI algorithms, providing a flexible query interface to distributed knowledge bases. It can be used as a component (e.g. a Python library) or as a stand-alone server to store essentially arbitrarily large knowledge bases and to provide means for agents to traverse regions of the hypergraph and perform global queries involving properties, connectivity, subgraph topology, etc.
DAS can be understood as a persistence layer for knowledge bases used in OpenCog Hyperon.
The data manipulation API provides a defined set of operations without exposing database details such as data modeling and the DBMS (Database Management System) being used. This is important because it allows us to evolve the data model inside DAS and even change the DBMS without affecting the integration with the AI agents.
But being an abstraction for the data model is not DAS' only purpose. While connecting AI agents to the knowledge bases, DAS provides several other capabilities, described throughout this document.
This is why DAS is not just a Data Access Object or a database interface layer, but rather a more complex OpenCog Hyperon component that abstracts not only data modeling and access but also several other algorithms closely related to the way AI agents manipulate information.
DAS is delivered as a Python library, hyperon-das, which can be used in two different ways:
- as a local DAS, i.e. a library inside the caller's application, keeping atoms in RAM or in a local DBMS; or
- as a client of a remote DAS server, which stores the knowledge base and answers queries remotely.
Components in the DAS architecture are designed to provide the same data manipulation API regardless of whether DAS is being used locally or remotely and, in the case of a local DAS, whether DB persistence is being used or not.
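For illustration, a minimal sketch of the two usage modes. The query_engine and atomdb parameters are the ones mentioned later in this document; the host/port values (and the exact constructor signature for the remote case) are assumptions to be checked against the installed hyperon-das release.

```python
from hyperon_das import DistributedAtomSpace

# Local DAS keeping all atoms in RAM (no persistence).
das = DistributedAtomSpace(query_engine='local', atomdb='ram')

# A local DAS backed by Redis/MongoDB persistence would use atomdb='redis_mongo'
# plus the DB connection parameters (omitted here).

# Remote DAS: a client connected to a stand-alone DAS server
# (host/port values below are placeholders).
remote_das = DistributedAtomSpace(query_engine='remote', host='1.2.3.4', port=8080)
```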
Part of this API is delegated to the Traverse Engine, which interacts with the Query Engine and the Cache to let the user traverse the Atomspace hypergraph. Operations like finding the links pointing from/to a given atom or finding atoms in the surrounding neighborhood are performed by this engine, which controls the pre-fetching of surrounding atoms when a remote DAS is being used, so that following links can be done quickly.
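As a sketch of how this looks through the public API documented below (assuming `das` is a DistributedAtomSpace already loaded with the example knowledge base used throughout this document):

```python
# Start a traversal cursor at the 'human' Concept node and probe its surroundings.
human = das.get_node_handle(node_type='Concept', node_name='human')
cursor = das.get_traversal_cursor(human)

print(cursor.get())                                     # atom document under the cursor
for link in cursor.get_links(link_type='Similarity'):   # links pointing to the cursor
    print(link['type'], link['targets'])

cursor.follow_link(link_type='Similarity')              # move the cursor to a neighboring atom
```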
The Query Engine is where global queries are processed. These are queries for specific atoms or sets of atoms that satisfy some criteria, including pattern matching. When making a query, the user can specify whether only local atoms should be considered or whether atoms in remote DASs should be searched as well. In the latter case, the Query Engine connects to the remote OpenFaaS servers to run the queries in the remote DASs and returns an answer which is a proper combination of local and remote information. For instance, if there are different versions of the same atom locally and in one of the remote DASs, the local version is returned.
Both engines use the Cache to make queries involving a remote DAS faster. DAS' cache is not exactly like a traditional cache, where data is stored basically in the same way in both the cache and the primary data repository and queries are answered by searching the former and then the latter. DAS' cache implements this functionality, but it also sorts and partitions query results in such a way that the caller sees the most relevant results first.
All queries that return more than one atom return an iterator to the results instead of the results themselves. This way, only a subset of the results is returned in a remote query. As the caller iterates, further chunks of results are fetched on demand from the remote DAS until all the results have been visited. Before splitting the results into chunks, the resulting atoms are sorted by "relevance", which can be a measure based on atoms' Short and Long Term Importance (STI and LTI), so that the most relevant results are iterated first. This is important because most AI agents make several queries and visit the results in a combinatorial fashion, so visiting every single possible combination of results is not practical. Having results sorted by relevance allows the agents to constrain the search and eventually avoid fetching too many chunks of results from the remote server.
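As a small sketch of this behavior, an agent can stop consuming the iterator early and thereby avoid paging further chunks from the remote server. Here `pattern` is assumed to be a query dict as described in the API reference below:

```python
# das.query() returns an iterator whose results are already sorted by relevance;
# stopping early means no additional chunks are fetched from the remote DAS.
most_relevant = []
for i, answer in enumerate(das.query(pattern)):
    most_relevant.append(answer)
    if i == 9:  # keep only the 10 most relevant answers
        break
```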
The AtomDB is somewhat like a Data Access Object or a database interface layer that abstracts the calls to the database where atoms are actually stored. Having this abstraction is important because it allows us to change or extend the actual data storage without affecting the query algorithms (such as pattern matching) implemented in the traverse and query engines. AtomDB can be backed by in-RAM data structures or by one or more DBMSs.
DAS uses a DBMS to store atoms, leveraging the indexing capabilities of this DBMS to retrieve atoms faster. In addition, DAS also creates other custom indexes and stores them in another DBMS. The most relevant of these indexes is the Pattern Inverted Index.
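Custom indexes can also be created explicitly through the API documented below. For example (the field name 'tag' and atom type 'Expression' are illustrative):

```python
# Create a custom index on the 'tag' field of all 'Expression' links,
# then use it to retrieve the atoms whose 'tag' matches a given value.
index_id = das.create_field_index('link', 'tag', type='Expression')
for atom in das.custom_query(index_id, tag='DAS'):
    print(atom['handle'])
```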
An inverted index is a data structure that maps contents (words, sentences, numbers, etc.) to where they can be found in a given data container (database, file system, etc.).
This type of data structure is widely used in document retrieval systems to implement efficient search engines. The idea is to spend computational time when documents are inserted in the document base to index and record the words that appear in each document (and possibly their positions inside the documents). Afterwards, this index can be used by the search engine to efficiently locate documents that contain a given set of keywords.
The entities in OpenCog Hyperon's context are different from the ones in typical document retrieval systems, but their roles and the general idea of the algorithms are very similar. In OpenCog Hyperon's context, a knowledge base is a set of toplevel links (which may point to nodes or to other links). When the knowledge base is loaded, we can create an inverted index of the patterns present in each toplevel link and use this index later to perform pattern matching.
For instance, given a toplevel link like this one:
Inherits
    <Concept A>
    <Concept B>
We could add entries like these to the Pattern Inverted Index (where H1 is the handle of the toplevel link above):
Inherits * <Concept B> ==> H1
Inherits <Concept A> * ==> H1
Inherits * * ==> H1
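A minimal, simplified sketch of the idea (not DAS' actual implementation): the index is a map from patterns, where the link type and/or targets are replaced by wildcards, to the handles of the toplevel links matching them.

```python
from itertools import product

WILDCARD = '*'

def pattern_entries(link_type, targets):
    """Yield every pattern obtained by wildcarding the type and/or each target."""
    options = [(link_type, WILDCARD)] + [(t, WILDCARD) for t in targets]
    for combo in product(*options):
        yield combo  # e.g. ('Inherits', '*', '<Concept B>')

# Building the index for the toplevel link above (handle H1).
inverted_index = {}
for pattern in pattern_entries('Inherits', ['<Concept A>', '<Concept B>']):
    inverted_index.setdefault(pattern, set()).add('H1')

# Pattern matching then becomes a simple lookup:
print(inverted_index[('Inherits', '*', '<Concept B>')])  # {'H1'}
```

In practice DAS limits which wildcard combinations are generated per link type (see the reindex() method in the API reference below) so the index does not grow exponentially.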
DAS' query engine can answer pattern matching queries. These are queries where the caller specifies a pattern, i.e. a boolean expression of subgraphs with nodes, links and wildcards, and the engine finds every subgraph in the knowledge base that satisfies the passed expression.
For instance, suppose we have the following knowledge base in DAS.
(figure: example knowledge base)
We could search for a pattern like:
AND
    Similar(V1, V2)
    NOT
        AND
            IS_A(V1, V3)
            IS_A(V2, V3)
V1, V2 and V3 are wildcards or variables. In any candidate subgraph answer, the atom replacing V1, for instance, should be the same in all the links where V1 appears. In other words, with this pattern we are searching for two nodes V1 and V2 such that there exists a similarity link between them but there's no pair of inheritance links pointing V1 and V2 to the same node V3, no matter the value of V3.
In this example, Chimp and Human are not a suitable answer to replace V1 and V2 because there's a possible value for V3 that satisfies the AND clause in the pattern, as shown below.
(figure: a value for V3 that satisfies the AND clause for Chimp and Human)
On the other hand, there are other pairs of nodes which could be used to match V1 and V2 without matching the AND clause, as shown below.
(figure: a pair of nodes matching the pattern)
The answer for the query is all the subgraphs that satisfy the pattern. In our example, the answer would be as follows.
(figure: query answer subgraphs)
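Conjunctive patterns like the positive part of this example can be written directly in the query dict format documented in the API reference below, where a list of query dicts is interpreted as an AND. The sketch below covers only Similarity(V1, V2) AND Inheritance(V1, V3); the negated clause is not covered here, and the type names follow the example knowledge base.

```python
conjunction = [
    {
        "atom_type": "link",
        "type": "Similarity",
        "targets": [
            {"atom_type": "variable", "name": "V1"},
            {"atom_type": "variable", "name": "V2"},
        ],
    },
    {
        "atom_type": "link",
        "type": "Inheritance",
        "targets": [
            {"atom_type": "variable", "name": "V1"},
            {"atom_type": "variable", "name": "V3"},
        ],
    },
]
# Each answer carries an assignment from variables to atom handles.
for answer in das.query(conjunction):
    print(answer.assignment.mapping['V1'], answer.assignment.mapping['V2'])
```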
Before loading a knowledge base into DAS, you need to define a proper mapping to Atomspace nodes and links. DAS doesn't make any assumptions regarding node or link types, arity, etc. When adding nodes and links using DAS' API, one may specify atom types freely, and the semantic meaning of such atom types is entirely up to the application. DAS doesn't do any kind of processing based on pre-defined types (actually, there are no internally pre-defined atom types).
DAS also doesn't provide a way to read a text, SQL or any other type of file in order to load a knowledge base; there's no DAS-defined file syntax for this. If one needs to import a knowledge base, one needs to provide a proper loader application that parses the input file(s) and makes the proper calls to DAS' API in order to add nodes and links.
Surely one of the interesting topics for future/ongoing work on DAS is to provide loaders (and the respective node/link mappings) for different types of knowledge base formats like SQL, Atomese, etc. We already have such a loader for MeTTa files.
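As an illustration of what such a loader looks like, the sketch below reads a hypothetical tab-separated file with lines such as `Inheritance<TAB>human<TAB>mammal` and maps each line to two Concept nodes and a link. The file format, the 'Concept' type name and the commit step are assumptions, not a DAS-defined syntax.

```python
from hyperon_das import DistributedAtomSpace

def load_tsv(das: DistributedAtomSpace, path: str) -> None:
    """Toy loader: one '<link_type>\t<source>\t<target>' triple per line."""
    with open(path) as f:
        for line in f:
            link_type, source, target = line.rstrip('\n').split('\t')
            das.add_link({
                'type': link_type,
                'targets': [
                    {'type': 'Concept', 'name': source},
                    {'type': 'Concept', 'name': target},
                ],
            })
    das.commit_changes()  # relevant when the DAS is remote or backed by redis_mongo

das = DistributedAtomSpace()
load_tsv(das, 'animals.tsv')
```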
The DAS server is deployed in a Lambda Architecture based on either OpenFaaS or AWS Lambda. We made a comparative study of these two architectures (results are presented in this report) and decided to prioritize OpenFaaS. Although deployment in AWS Lambda is still possible, currently only OpenFaaS is supported by our automated deployment tool. This architecture is presented in the diagram below.
(figure: DAS server deployment architecture)
When deploying in AWS Lambda, MongoDB and Redis can be replaced by AWS' DocumentDB and ElastiCache respectively, but the overall structure is basically the same.
Functions are deployed on servers in the cloud as Docker containers, built in our CI/CD pipeline by automated GitHub Actions scripts and stored in a private Docker Hub registry.
Clients can connect using HTTP, gRPC or external lambda functions (OpenFaaS functions can only connect to OpenFaaS, and the same is true for AWS functions).
DAS is versioned and released as a library on PyPI.
To publish a new version of DAS AtomDB, the first step is to access the AtomDB repository.
Before publishing the version, make sure the pyproject.toml file is updated with the desired new version number by locating and changing the version parameter in the [tool.poetry] section.
After this change, it is necessary to commit to the master branch to record the change.
Before creating a new version, take note of the last version available at https://github.com/singnet/das-atom-db/tags.
Then trigger the publish workflow manually via the 'Actions' tab (the process mirrors the one described for the DAS Query Engine below). After triggering it, refresh the page and check that a new workflow run is in progress. By clicking on it, you can track all jobs. At the end of the process, all jobs should have a green check mark. If there is an error in any job, click on it to view the logs and identify the cause of the problem.
If everything goes as expected, the new version tag should be available at https://github.com/singnet/das-atom-db/tags and https://pypi.org/project/hyperon-das-atomdb/#history.
To publish a new version of DAS Query Engine, follow a process similar to the one described above for DAS AtomDB. Access the repository at https://github.com/singnet/das-query-engine.
Make sure to update the version number in the pyproject.toml file. Additionally, it is necessary to update the version of hyperon-das-atomdb in the dependencies, as specified in the [tool.poetry.dependencies] section.
After this change, it is necessary to commit to the master branch to record the change.
Before creating a new version, take note of the last version available at https://github.com/singnet/das-query-engine/tags.
Manually trigger the 'Publish to PyPI' workflow via the 'Actions' tab in the repository. Click 'Run workflow', ensure the master branch is selected, enter the desired version number in the format 1.0.0, then click 'Run workflow' to proceed.
Just like in the case of DAS AtomDB, refresh the page and check if a new workflow is running. By clicking on it, you can track all jobs. At the end of the process, all jobs should have a green check mark. If there is an error in any job, it is possible to click on it to view the logs and identify the cause of the problem.
If everything goes as expected, the new version tag should be available at https://github.com/singnet/das-query-engine/tags and https://pypi.org/project/hyperon-das/#history.
To publish a new version of DAS Serverless Functions, update the version of hyperon-das in the das-query-engine/requirements.txt file. This ensures that the correct version is used during the workflow build.
After this change, it is necessary to commit to the master branch to record the change.
Before creating a new version, take note of the last version available at https://github.com/singnet/das-serverless-functions/tags.
Manually trigger the 'Vultr Build' workflow via the 'Actions' tab in the repository. Ensure the master branch is selected, then input the desired version number following the format 1.0.0. Next, choose 'das-query-engine' from the dropdown menu, and finally, click 'Run workflow' to proceed.
After the workflow execution, refresh the page and check if a new workflow is running. By clicking on it, you can track all jobs. At the end of the process, all jobs should have a green check mark. If there is an error in any job, it is possible to click on it to view the logs and identify the cause of the problem.
Note that this pipeline should generate an image on Docker Hub, following the format 1.0.0-queryengine. Make sure the image is generated correctly and available at https://hub.docker.com/r/trueagi/das/tags. After the workflow execution, verify that all jobs completed successfully. The new version tag should be available at https://github.com/singnet/das-serverless-functions/tags.
The publication of the generated image to the production and development environments is carried out in the das-infra-stack-vultr repository.
Before starting the deployment, it is necessary to update the version of hyperon-das in the requirements.txt file, ensuring that the correct version is used during integration tests. Before committing the changes to a branch, make the necessary changes in the das-function.yml file, updating the image version to the one generated earlier.
Commit your changes to the 'develop' branch or merge them into the 'develop' branch for deployment to the development environment. Following the merge, the 'Vultr Deployment' pipeline will initiate automatically. Verify the successful completion of all jobs with the 'develop' suffix to ensure the development environment is accurately updated.
After verification, open a PR from develop to master. After the merge to master, check that all jobs completed successfully, ensuring that the production environment is correctly updated. If errors occur during tests, they are likely related to the response format, which may have changed due to previously published libraries. In case of problems, it is possible to roll back by reverting the commit to return to the previous version.
To publish a new version of DAS Metta Parser, access the repository at https://github.com/singnet/das-metta-parser.
Before creating a new version, take note of the last version available at https://github.com/singnet/das-metta-parser/tags.
Manually trigger the 'DAS Metta Parser Build' workflow via the 'Actions' tab in the repository. Click 'Run workflow', ensure the master branch is selected, enter the desired version number in the format 1.0.0, then click 'Run workflow' to proceed.
Refresh the page and check if a new workflow is running. By clicking on it, you can track all jobs. At the end of the process, all jobs should have a green check mark. If there is an error in any job, it is possible to click on it to view the logs and identify the cause of the problem.
Note that this pipeline should generate an image on Docker Hub, following the format 1.0.0-toolbox. Make sure the image is generated correctly and available at https://hub.docker.com/r/trueagi/das/tags. After the workflow execution, verify that all jobs completed successfully. The new version tag should be available at https://github.com/singnet/das-metta-parser/tags.
To publish a new version of DAS Toolbox, access the repository at https://github.com/singnet/das-toolbox/.
Make sure to update the toolbox image version number in the src/config/config.py file. This is important because the syntax check and loader are executed from this toolbox image.
After this change, it is necessary to commit to the master branch to record the change.
Before creating a new version, take note of the last version available at https://github.com/singnet/das-toolbox/tags.
Manually trigger the 'DAS CLI Build' workflow via the 'Actions' tab in the repository. Click 'Run workflow', ensure the master branch is selected, enter the desired version number in the format 1.0.0, then click 'Run workflow' to proceed.
After the workflow execution, refresh the page and check if a new workflow is running. By clicking on it, you can track all jobs. At the end of the process, all jobs should have a green check mark. If there is an error in any job, it is possible to click on it to view the logs and identify the cause of the problem.
After the workflow execution, verify that all jobs completed successfully. The new version tag should be available at https://github.com/singnet/das-toolbox/tags. Additionally, the CLI file generated by the pipeline will be available for download in the workflow artifacts, allowing it to be used locally.
DistributedAtomSpace

add_link(link_params)

Adds a link to DAS.

A link is represented by a Python dict which may contain any number of keys associated with values of any type (including lists, sets, nested dicts, etc.), which are all recorded with the link, but must contain at least the keys "type" and "targets". "type" should map to a string and "targets" to a list of Python dicts, each of them being itself a representation of either a node or a nested link. "type" and "targets" define the link uniquely, i.e. two links with the same "type" and "targets" are considered to be the same entity.

Parameters:

- link_params (Dict[str, Any], required): A dictionary with link data. The following keys are mandatory: 'type' (the type of the link) and 'targets' (a list of target elements).

Returns:

- Dict[str, Any]: The information about the added link, including its unique handle and other fields used internally in DAS.

Raises:

- AddLinkException: If the 'type' or 'targets' fields are missing or invalid somehow.
Examples:
+>>> link_params = {
+ 'type': 'Evaluation',
+ 'targets': [
+ {'type': 'Predicate', 'name': 'Predicate:has_name'},
+ {
+ 'type': 'Set',
+ 'targets': [
+ {'type': 'Reactome', 'name': 'Reactome:R-HSA-164843'},
+ {'type': 'Concept', 'name': 'Concept:2-LTR circle formation'},
+ ],
+ },
+ ],
+ }
+>>> das.add_link(link_params)
+
add_node(node_params)
+
+Adds a node to DAS.
A node is represented by a Python dict which may contain any number of keys associated with values of any type (including lists, sets, nested dicts, etc.), which are all recorded with the node, but must contain at least the keys "type" and "name", mapping to strings which define the node uniquely, i.e. two nodes with the same "type" and "name" are considered to be the same entity.

Parameters:

- node_params (Dict[str, Any], required): A dictionary with node data. The following keys are mandatory: 'type' (node type) and 'name' (node name).

Returns:

- Dict[str, Any]: The information about the added node, including its unique handle and other fields used internally in DAS.

Raises:

- AddNodeException: If 'type' or 'name' fields are missing or invalid somehow.
Examples:
+>>> node_params = {
+ 'type': 'Reactome',
+ 'name': 'Reactome:R-HSA-164843',
+ }
+>>> das.add_node(node_params)
+
clear()
+
+Delete all atoms and custom indexes.
commit_changes(**kwargs)

Commit changes (atom addition/deletion/change) to the databases or to the remote DAS Server, depending on the type of DAS being used.

The behavior of this method depends on the type of DAS being used.

When called in a DAS instantiated with query_engine=remote

This is called a "Remote DAS" in the documentation. A Remote DAS is connected to a remote DAS Server which is used to make queries, traversing, etc., but it also keeps a local Atomspace in RAM which is used as a cache. Atom changes are made initially in this local cache. When commit_changes() is called in this type of DAS, these changes are propagated to the remote DAS Server.

When called in a DAS instantiated with query_engine=local and atomdb='ram'

No effect.

When called in a DAS instantiated with query_engine=local and atomdb='redis_mongo'

The AtomDB keeps buffers of changes which are not actually written to the DBs until commit_changes() is called (or until those buffers reach a size threshold).
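A minimal sketch of the redis_mongo case, assuming `das` was instantiated with query_engine='local' and atomdb='redis_mongo' (DB connection parameters omitted):

```python
# Additions are buffered in the AtomDB until commit_changes() is called.
das.add_node({'type': 'Concept', 'name': 'human'})
das.add_node({'type': 'Concept', 'name': 'mammal'})
das.commit_changes()  # flush the buffered changes to MongoDB/Redis
```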
+count_atoms()
+
+Count nodes and links in DAS.
In the case of a remote DAS, count the total number of nodes and links stored locally and remotely. If there is more than one instance of the same atom (local and remote), it's counted only once.

Returns:

- Tuple[int, int]: (node_count, link_count)
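For example (output values depend on the loaded knowledge base):

```python
node_count, link_count = das.count_atoms()
print(f'{node_count} nodes, {link_count} links')
```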
create_field_index(atom_type, field, type=None, composite_type=None)
+
+Create a custom index on the passed field of all atoms of the passed type.
Remote DAS allows the creation of custom indexes based on custom fields in nodes or links. These indexes can be used to make subsequent custom queries.

Parameters:

- atom_type (str, required): Either 'link' or 'node', depending on whether the index is to be created for links or nodes.
- field (str, required): Field where the index will be created upon.
- type (str, optional): Only atoms of the passed type will be indexed. Defaults to None, meaning that atom type doesn't matter.
- composite_type (List[Any], optional): Only atoms of the passed composite type will be indexed. Defaults to None.

Raises:

- ValueError: If parameters are invalid somehow.

Returns:

- str: The index ID. This ID should be used to make subsequent queries using this newly created index.
Examples:
+>>> index_id = das.create_field_index('link', 'tag', type='Expression')
+>>> index_id = das.create_field_index('link', 'tag', composite_type=['Expression', 'Symbol', 'Symbol', ['Expression', 'Symbol', 'Symbol', 'Symbol']])
+
custom_query(index_id, **kwargs)
+
+Perform a query using a previously created custom index.
+Actual query parameters can be passed as kwargs according to the type of the previously +created filter.
Parameters:

- index_id (str, required): Custom index id to be used in the query.

Raises:

- NotImplementedError: If called from a Local DAS in RAM only.

Returns:

- Union[Iterator, List[Dict[str, Any]]]: An iterator or list of dicts containing atom data.
Examples:
+>>> das.custom_query(index_id='index_123', tag='DAS')
+>>> das.custom_query(index_id='index_123', tag='DAS', no_iterator=True)
+
fetch(query=None, host=None, port=None, **kwargs)
+
Fetch, from a DAS Server, all atoms that match the passed query, or all atoms in the server if None is passed as query.

Instead of adding atoms by calling add_node() and add_link() directly, it's possible to fetch all or part of the contents from a DAS server using the method fetch(). This method doesn't create a lasting connection with the DAS server; it just fetches the atoms once and closes the connection, so any subsequent changes or queries will not be propagated to the server in any way. After fetching the atoms, all queries will be made locally. It's possible to call fetch() multiple times, fetching from the same DAS Server or from different ones.

The input query is a link, used as a pattern to make the query. Variables can be used as link targets as well as nodes. Nested links are allowed as well.

Parameters:

- query (Optional[Union[List[dict], dict]], optional): A pattern described as a link (possibly with nested links) with nodes and variables used to query the knowledge base. Defaults to None.
- host (Optional[str], optional): Address of the remote server. Defaults to None.
- port (Optional[int], optional): Port of the remote server. Defaults to None.

Raises:

- ValueError: If parameters are somehow invalid.

Returns:

- Union[None, List[dict]]: Returns None or, if running on the server, a list of dictionaries containing detailed information of the atoms.
Examples:
+>>> query = {
+ "atom_type": "link",
+ "type": "Expression",
+ "targets": [
+ {"atom_type": "node", "type": "Symbol", "name": "Inheritance"},
+ {"atom_type": "variable", "name": "v1"},
+ {"atom_type": "node", "type": "Symbol", "name": '"mammal"'},
+ ],
+ }
+ das = DistributedAtomSpace()
+ das.fetch(query, host='123.4.5.6', port=8080)
+
get_atom(handle, **kwargs)
+
+Retrieve an atom given its handle.
Parameters:

- handle (str, required): Atom's handle.

Returns:

- Dict[str, Any]: A Python dict with all atom data.

Raises:

- AtomDoesNotExist: If the corresponding atom doesn't exist.
Examples:
+>>> human_handle = das.get_node_handle(node_type='Concept', node_name='human')
+>>> result = das.get_atom(human_handle)
+>>> print(result)
+{
+ 'handle': 'af12f10f9ae2002a1607ba0b47ba8407',
+ 'composite_type_hash': 'd99a604c79ce3c2e76a2f43488d5d4c3',
+ 'name': 'human',
+ 'named_type': 'Concept'
+}
+
get_incoming_links(atom_handle, **kwargs)
+
Retrieve all links which have the passed handle as one of their targets.

Parameters:

- atom_handle (str, required): Atom's handle.

Returns:

- List[Union[Dict[str, Any], str]]: A list of dictionaries containing detailed information of the atoms, or a list of strings containing the atom handles.
Examples:
+>>> rhino = das.get_node_handle('Concept', 'rhino')
+>>> links = das.get_incoming_links(rhino)
+>>> for link in links:
+>>> print(link['type'], link['targets'])
+Similarity ['d03e59654221c1e8fcda404fd5c8d6cb', '99d18c702e813b07260baf577c60c455']
+Similarity ['99d18c702e813b07260baf577c60c455', 'd03e59654221c1e8fcda404fd5c8d6cb']
+Inheritance ['99d18c702e813b07260baf577c60c455', 'bdfe4e7a431f73386f37c6448afe5840']
+
get_link(link_type, link_targets)
+
+Retrieve a link given its type and list of targets.
Parameters:

- link_type (str, required): Link type.
- link_targets (List[str], required): List of target handles.

Returns:

- Dict[str, Any]: A Python dict with all link data.

Raises:

- LinkDoesNotExist: If the corresponding link doesn't exist.
Examples:
+>>> human_handle = das.get_node_handle('Concept', 'human')
+>>> monkey_handle = das.get_node_handle('Concept', 'monkey')
+>>> result = das.get_link(
+ link_type='Similarity',
+ link_targets=[human_handle, monkey_handle],
+ )
+>>> print(result)
+{
+ 'handle': 'bad7472f41a0e7d601ca294eb4607c3a',
+ 'composite_type_hash': 'ed73ea081d170e1d89fc950820ce1cee',
+ 'is_toplevel': True,
+ 'composite_type': [
+ 'a9dea78180588431ec64d6bc4872fdbc',
+ 'd99a604c79ce3c2e76a2f43488d5d4c3',
+ 'd99a604c79ce3c2e76a2f43488d5d4c3'
+ ],
+ 'named_type': 'Similarity',
+ 'named_type_hash': 'a9dea78180588431ec64d6bc4872fdbc',
+ 'targets': [
+ 'af12f10f9ae2002a1607ba0b47ba8407',
+ '1cdffc6b0b89ff41d68bec237481d1e1'
+ ]
+}
+
get_link_handle(link_type, link_targets)
+
+
+ staticmethod
+
+
+Computes the handle of a link, given its type and targets' handles.
Note that this is a static method which doesn't actually query the stored atomspace in order to compute the handle. Instead, it just runs an MD5 hashing algorithm on the parameters that uniquely identify links (i.e. type and list of targets). This means e.g. that two links with the same type and the same targets are considered to be the exact same entity, as they will have the same handle.
Parameters:

- link_type (str, required): Link type.
- link_targets (List[str], required): List with the target handles.

Returns:

- str: Link's handle.
Examples:
+>>> human_handle = das.get_node_handle(node_type='Concept', node_name='human')
+>>> monkey_handle = das.get_node_handle(node_type='Concept', node_name='monkey')
>>> result = das.get_link_handle(link_type='Similarity', link_targets=[human_handle, monkey_handle])
+>>> print(result)
+"bad7472f41a0e7d601ca294eb4607c3a"
+
get_links(link_type, target_types=None, link_targets=None, **kwargs)
+
Retrieve all links that match the passed search criteria.

This method can be used in three different ways.

1. Retrieve all the links of a given type

Set link_type to the desired type and set target_types=None and link_targets=None.

2. Retrieve all the links of a given type whose targets are of given types

Set link_type to the desired type and target_types to a list with the desired type of each target.

3. Retrieve all the links of a given type whose targets match a given list of handles

Set link_type to the desired type (or pass link_type='*' to retrieve links of any type) and set link_targets to a list of handles. Any handle in this list can be '*', meaning that any handle in that position of the targets list is a match for the query. Set target_types=None.

Parameters:

- link_type (str, required): Link type being searched (can be '*' when link_targets is not None).
- target_types (List[str], optional): Template of target types being searched. Defaults to None.
- link_targets (List[str], optional): Template of targets being searched (handles or '*'). Defaults to None.

Returns:

- Union[Iterator, List[Dict[str, Any]]]: A list of dictionaries containing detailed information of the links.
Examples:
+1. Retrieve all the links of a given type
+
+ >>> links = das.get_links(link_type='Inheritance')
+ >>> for link in links:
+ >>> print(link['type'], link['targets'])
+ Inheritance ['5b34c54bee150c04f9fa584b899dc030', 'bdfe4e7a431f73386f37c6448afe5840']
+ Inheritance ['b94941d8cd1c0ee4ad3dd3dcab52b964', '80aff30094874e75028033a38ce677bb']
+ Inheritance ['bb34ce95f161a6b37ff54b3d4c817857', '0a32b476852eeb954979b87f5f6cb7af']
+ ...
+
+2. Retrieve all the links of a given type whose targets are of given types.
+
+ >>> links = das.get_links(link_type='Inheritance', target_types=['Concept', 'Concept'])
+ >>> for link in links:
+ >>> print(link['type'], link['targets'])
+ Inheritance ['5b34c54bee150c04f9fa584b899dc030', 'bdfe4e7a431f73386f37c6448afe5840']
+ Inheritance ['b94941d8cd1c0ee4ad3dd3dcab52b964', '80aff30094874e75028033a38ce677bb']
+ Inheritance ['bb34ce95f161a6b37ff54b3d4c817857', '0a32b476852eeb954979b87f5f6cb7af']
+ ...
+
+3. Retrieve all the links of a given type whose targets match a given list of
+ handles
+
+ >>> snake = das.get_node_handle('Concept', 'snake')
+ >>> links = das.get_links(link_type='Similarity', link_targets=[snake, '*'])
+ >>> for link in links:
+ >>> print(link['type'], link['targets'])
+ Similarity ['c1db9b517073e51eb7ef6fed608ec204', 'b94941d8cd1c0ee4ad3dd3dcab52b964']
+ Similarity ['c1db9b517073e51eb7ef6fed608ec204', 'bb34ce95f161a6b37ff54b3d4c817857']
+
+
+ get_node(node_type, node_name)
+
+Retrieve a node given its type and name.
Parameters:

- node_type (str, required): Node type.
- node_name (str, required): Node name.

Returns:

- Dict[str, Any]: A Python dict with all node data.

Raises:

- NodeDoesNotExist: If the corresponding node doesn't exist.
Examples:
+>>> result = das.get_node(
+ node_type='Concept',
+ node_name='human'
+ )
+>>> print(result)
+{
+ 'handle': 'af12f10f9ae2002a1607ba0b47ba8407',
+ 'composite_type_hash': 'd99a604c79ce3c2e76a2f43488d5d4c3',
+ 'name': 'human',
+ 'named_type': 'Concept'
+}
+
get_node_handle(node_type, node_name)
+
+
+ staticmethod
+
+
+Computes the handle of a node, given its type and name.
Note that this is a static method which doesn't actually query the stored atomspace in order to compute the handle. Instead, it just runs an MD5 hashing algorithm on the parameters that uniquely identify nodes (i.e. type and name). This means e.g. that two nodes with the same type and the same name are considered to be the exact same entity, as they will have the same handle.
Parameters:

- node_type (str, required): Node type.
- node_name (str, required): Node name.

Returns:

- str: Node's handle.
Examples:
+>>> result = das.get_node_handle(node_type='Concept', node_name='human')
+>>> print(result)
+"af12f10f9ae2002a1607ba0b47ba8407"
+
get_traversal_cursor(handle, **kwargs)
+
+Create and return a Traverse Engine, an object that can be used to traverse the +atomspace hypergraph.
A TraverseEngine is like a cursor which points to an atom in the hypergraph and can be used to probe for links and neighboring atoms and then move on by following links. Its functioning is closely tied to the cache system in order to optimize the order in which atoms are presented to the caller when probing the neighborhood, and to use the cache's "atom paging" capabilities to minimize latency when used with a remote DAS.

Parameters:

- handle (str, required): Atom's handle.

Raises:

- GetTraversalCursorException: If the passed handle is invalid somehow.

Returns:

- TraverseEngine: The object that allows traversal of the hypergraph.
query(query, parameters={})
+
+Perform a query on the knowledge base using a dict as input and return an +iterator of QueryAnswer objects. Each such object carries the resulting mapping +of variables in the query and the corresponding subgraph which is the result +of applying such mapping to rewrite the query.
+The input dict is a link, used as a pattern to make the query. +Variables can be used as link targets as well as nodes. Nested links are +allowed as well.
Parameters:

- query (Union[List[Dict[str, Any]], Dict[str, Any]], required): A pattern described as a link (possibly with nested links) with nodes and variables used to query the knowledge base. If the query is represented as a list of dictionaries, it is interpreted as a conjunction (AND) of all queries within the list.
- parameters (Dict[str, Any], optional): Query optional parameters. Defaults to {}.

Returns:

- Union[Iterator[QueryAnswer], List[QueryAnswer]]: An iterator of QueryAnswer objects, which have a field 'assignment', with a mapping from variables to handles, and another field 'subgraph', with the resulting subgraph after applying 'assignment' to rewrite the query.

Raises:

- UnexpectedQueryFormat: If query resolution leads to an invalid state.
Examples:
+>>> das.add_link({
+ "type": "Expression",
+ "targets": [
+ {"type": "Symbol", "name": "Test"},
+ {
+ "type": "Expression",
+ "targets": [
+ {"type": "Symbol", "name": "Test"},
+ {"type": "Symbol", "name": "2"}
+ ]
+ }
+ ]
+})
+>>> query_params = {"toplevel_only": False}
+>>> q1 = {
+ "atom_type": "link",
+ "type": "Expression",
+ "targets": [
+ {"atom_type": "variable", "name": "v1"},
+ {
+ "atom_type": "link",
+ "type": "Expression",
+ "targets": [
+ {"atom_type": "variable", "name": "v2"},
+ {"atom_type": "node", "type": "Symbol", "name": "2"},
+ ]
+ }
+ ]
+}
+>>> for result in das.query(q1, query_params):
+>>> print(result.assignment.mapping['v1'])
+>>> print(result.assignment.mapping['v2'])
>>> print(result.subgraph)
+'233d9a6da7d49d4164d863569e9ab7b6'
+'963d66edfb77236054125e3eb866c8b5'
+[
+ {
+ 'handle': 'dbcf1c7b610a5adea335bf08f6509978',
+ 'type': 'Expression',
+ 'template': ['Expression', 'Symbol', ['Expression', 'Symbol', 'Symbol']],
+ 'targets': [
+ {'handle': '963d66edfb77236054125e3eb866c8b5', 'type': 'Symbol', 'name': 'Test'},
+ {
+ 'handle': '233d9a6da7d49d4164d863569e9ab7b6',
+ 'type': 'Expression',
+ 'template': ['Expression', 'Symbol', 'Symbol'],
+ 'targets': [
+ {'handle': '963d66edfb77236054125e3eb866c8b5', 'type': 'Symbol', 'name': 'Test'},
+ {'handle': '9f27a331633c8bc3c49435ffabb9110e', 'type': 'Symbol', 'name': '2'}
+ ]
+ }
+ ]
+ }
+]
+
reindex(pattern_index_templates=None)

Rebuild all indexes according to the passed specification.

Parameters:

- pattern_index_templates (Optional[Dict[str, Dict[str, Any]]], optional): Indexes are specified by atom type, in a dict mapping from atom types to a pattern template. A pattern template is also a dict:

  {
      "named_type": True/False,
      "selected_positions": [n1, n2, ...]
  }

  Pattern templates are applied to each link entered in the atom space in order to determine which entries should be created in the inverted pattern index. Entries in the inverted pattern index are like patterns where the link type and each of its targets may be replaced by wildcards. For instance, given a similarity link Similarity(handle1, handle2), entries could be created with the link type and/or any of its targets replaced by wildcards. If we created all possible index entries for all links, the pattern index size would grow exponentially, so we limit the entries created for each type of link; this is what a pattern template for a given link type is for: its "named_type" flag and "selected_positions" list determine which of those wildcard combinations are actually generated for links of that type. Defaults to None.
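A sketch of a call, assuming the template structure described above (the 'Similarity' key and the flag/position values are illustrative):

```python
# Rebuild the pattern inverted index using one pattern template per link type;
# the meaning of 'named_type' and 'selected_positions' is described above.
das.reindex({
    'Similarity': {
        'named_type': False,
        'selected_positions': [0, 1],
    }
})
```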
TraverseEngine
+
+
+follow_link(**kwargs)
+
Update the current cursor by following the first of the neighbors that point to the current cursor.

In this method it's possible to pass the following parameters:

- link_type=link_type: only links of the passed type are considered.
- cursor_position=n: only links in which the cursor occupies position n of their targets are returned.
- target_type=target_type: only links with at least one target of the passed type are considered.
- filter=F: F is a function or a tuple of functions that is used to filter the results after applying all other filters. F should expect a dict (the atom document) and return True if and only if this atom should be kept.

Possible use cases for the filter parameter:

a. traverse.get_neighbors(..., filter=custom_filter)
   -> custom_filter will be applied to Links.

b. traverse.get_neighbors(..., filter=(custom_filter1, custom_filter2))
   -> custom_filter1 will be applied to Links and custom_filter2 will be applied to Targets.

c. traverse.get_neighbors(..., filter=(None, custom_filter2))
   -> custom_filter2 will only be applied to Targets. This way there is no filter for Links.

d. traverse.get_neighbors(..., filter=(custom_filter1, None))
   -> custom_filter1 will be applied to Links. This case is equivalent to case a.
Returns:

- Dict[str, Any]: The current cursor. A Python dict with all atom data.
get()
+
+Returns the current cursor.
Returns:

- Dict[str, Any]: The current cursor. A Python dict with all atom data.
get_links(**kwargs)
+
Returns all links that have the current cursor as one of their targets, that is, any links that point to the cursor.

In this method it's possible to pass the following parameters:

- link_type=link_type: only links of the passed type are considered.
- cursor_position=n: only links in which the cursor occupies position n of their targets are returned.
- target_type=target_type: only links with at least one target of the passed type are considered.
- filter=F: a function used to filter the results after applying all other filters; it should expect a dict (the atom document) and return True if and only if the atom should be kept.

Returns:

- Iterator: An iterator that contains the links that match the criteria.
Examples:
>>> def has_score(atom):
        if 'score' in atom and atom['score'] > 0.5:
            return True
        return False
+>>> links = traverse_engine.get_links(
+ link_type='Ex',
+ cursor_position=2,
+ target_type='Sy',
+ filter=has_score
+ )
+>>> next(links)
+
get_neighbors(**kwargs)
+
Get all "neighbors" that point to the current cursor.

In this method it's possible to pass the following parameters:

- link_type=link_type: only links of the passed type are considered.
- cursor_position=n: only links in which the cursor occupies position n of their targets are considered.
- target_type=target_type: only neighbors of the passed type are returned.
- filter=F: F is a function or a tuple of functions that is used to filter the results after applying all other filters. F should expect a dict (the atom document) and return True if and only if this atom should be kept.

Possible use cases for the filter parameter:

a. traverse.get_neighbors(..., filter=custom_filter)
   -> custom_filter will be applied to Links.

b. traverse.get_neighbors(..., filter=(custom_filter1, custom_filter2))
   -> custom_filter1 will be applied to Links and custom_filter2 will be applied to Targets.

c. traverse.get_neighbors(..., filter=(None, custom_filter2))
   -> custom_filter2 will only be applied to Targets. This way there is no filter for Links.

d. traverse.get_neighbors(..., filter=(custom_filter1, None))
   -> custom_filter1 will be applied to Links. This case is equivalent to case a.
Returns:

- Iterator: An iterator that contains the neighbors that match the criteria.
Examples:
+>>> neighbors = traverse_engine.get_neighbors(
+ link_type='Ex',
+ cursor_position=2,
+ target_type='Sy',
+ filter=(link_filter, target_filter)
+ )
+>>> next(neighbors)
+
goto(handle)
+
Reset the current cursor to the passed handle.

Parameters:

- handle (str, required): The handle of the atom to go to.

Raises:

- AtomDoesNotExist: If the corresponding atom doesn't exist.

Returns:

- Dict[str, Any]: The current cursor. A Python dict with all atom data.
Examples:
+>>> traverse_engine.goto('asd1234567890')
+>>> {
+ 'handle': 'asd1234567890',
    'type': 'AI',
+ 'composite_type_hash': 'd99asd1234567890',
+ 'name': 'snet',
+ 'named_type': 'AI'
+ }
+