Could not find protobufConfig.cmake
#4
After stubbing out that code, I now get
I have both protobuf and grpc installed.
Can you try replacing atomspace-rpc/src/CMakeLists.txt line 10 (at commit 731ccee)
with
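(The exact replacement line was not preserved in this thread. For reference, a common fix for a "Could not find protobufConfig.cmake" error is switching `find_package` from config mode to CMake's bundled module mode; the sketch below is a guess at that pattern, not a quote of the maintainer's suggestion.)

```cmake
# Before (config mode -- fails unless protobufConfig.cmake was installed):
# find_package(protobuf CONFIG REQUIRED)

# After (module mode -- uses CMake's built-in FindProtobuf):
find_package(Protobuf REQUIRED)
include_directories(${Protobuf_INCLUDE_DIRS})
```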
As for gRPC, please check this issue comment from the library maintainers. Basically, you'll have to use the following
I will send a PR for these two issues tomorrow morning.
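(The maintainers' guidance referenced above generally amounts to building and installing gRPC from source with its CMake install targets enabled. A sketch along the lines of the upstream build instructions; the version tag and install prefix here are illustrative, not taken from this thread:)

```shell
# Build and install gRPC from source (tag and prefix are placeholders).
git clone --recurse-submodules -b v1.46.0 --depth 1 https://github.com/grpc/grpc
cd grpc
mkdir -p cmake/build && cd cmake/build
cmake -DgRPC_INSTALL=ON \
      -DgRPC_BUILD_TESTS=OFF \
      -DCMAKE_INSTALL_PREFIX=/usr/local \
      ../..
make -j4
sudo make install
```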
That fixed the protobuf issue! Thanks!
For grpc, I just used
Well, this is weird. After your suggestion, protobuf was found:
Then I ran
which it did not do, the first time around. I'm investigating now.
I'm stumped. Here's the meta-issue: I wanted to dump the contents of the atomspace for @amirouche as an example of an "s-expression database". Along the way, I tried updating The meta-question: why does Re: fragility: I'm also thinking that making
I hacked around the
Ugh. Now I'm getting
I get the feeling this code is... not well-maintained.
I haven't checked this code in a while. I will update it tomorrow.
FWIW, I hacked around in
Hi @linas, I have updated the CMake files and the README. Can you try again?
The
True. I am using fibers because I needed Go-like concurrency to pass messages between the code that runs pattern-matching queries, the atomese->json parser, and the file writer that writes the results. And I am depending on a specific version because the original version of fibers uses
I agree. And that will require a lot of changes. But right now, I am busy with other projects to expend such an effort on replacing fibers. Hopefully, I will do it some time soon :)
Yes. Wiki page is here: https://wiki.opencog.org/w/StorageNode and the demos are here: https://github.com/opencog/atomspace/tree/master/examples/atomspace The correct order to read the demos is: It's one common API that works for file, SQL, RocksDB and network access. See https://wiki.opencog.org/w/CogStorageNode for the network variant.
To use the network node, two repos are needed: https://github.com/opencog/cogserver and https://github.com/opencog/atomspace-cog FWIW, one could "easily" implement any other kind of networking using this same API. Although I don't see much point, since I think the CogStorageNode will be pretty danged fast. It's got very little overhead.
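(For readers who don't want to chase the wiki links: the StorageNode API being described looks roughly like the Guile Scheme sketch below. The node type and open/store/close calls follow the linked wiki page; the rocks:// path is illustrative.)

```scheme
(use-modules (opencog) (opencog persist) (opencog persist-rocks))

; Open a RocksDB-backed StorageNode (path is illustrative).
(define sto (RocksStorageNode "rocks:///tmp/atomspace-demo"))
(cog-open sto)

; Dump the entire AtomSpace to storage, then close.
(store-atomspace)
(cog-close sto)
```

Swapping `RocksStorageNode` for a `CogStorageNode` with a cogserver URL gives the network variant against the same API.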
BTW: distantly related, but perhaps of interest: I have this vague idea that perhaps it would be possible to write a shim that maps specific atoms to specific dataset queries, at run-time, on demand. So, for example: currently, you use a bulk importer to import all kinds of different bio data sources into the atomspace. This means that you have to wait, maybe a long time, before all those datasets get loaded. I'm thinking that it might be possible to write a mapper that converts Right now, this is just a hunch - I think it should be possible, but hard to tell till one tries to code it up. It seems like it could be a good way of interfacing to multiple data sources at once, instead of having to import them. Even better if it can be made read/write. Assuming you can live with the latency / overhead. This wouldn't be networked (unless the datasource is networked) -- rather, it's just a shim for converting Atomese into some other format, and back, on the fly, as-needed.
Sigh. The instructions call for building grpc from source, and that seems like a step too far. I'm going to punt. Mostly, I just needed to export the old genome dataset so that @amirouche could look at it; I did that, and so I don't need the rest of the baggage. |
You are referring to something like the Neo4j ETL Tool, but not limited to RDBMS, right? And I think this is a great idea. It will reduce the time spent writing code to convert some data format to
Users who don't want to deal with the latency can wait till everything is loaded to the Atomspace and proceed from there. But there will be cases where this won't be much of a problem, so it is win-win.
Completely understood. I remember also being frustrated by protobuf/grpc cmake issues while writing the code. With regards to the JSON version of the bio dataset, I think @tanksha has already converted it to JSON and can share.
I used the annotation-scheme JSON parser to convert the bio-atomspace into JSON; it can be found here: https://mozi.ai/datasets/bioas-json/. Note that the format is designed to be compatible with the cytoscape visualizer. As for the GRPC dependency issue, I set
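(For readers unfamiliar with what a cytoscape-compatible dump looks like, here is a minimal Python sketch that turns relationship triples into the cytoscape.js-style "elements" layout. The field names follow the general cytoscape.js convention; the triples and function are purely illustrative, not taken from the actual bioas-json dataset.)

```python
import json

def to_cytoscape(triples):
    """Convert (source, relation, target) triples into a
    cytoscape.js-style "elements" dict. Illustrative sketch only;
    the real bioas-json layout may differ in detail."""
    nodes, edges = {}, []
    for i, (src, rel, tgt) in enumerate(triples):
        for name in (src, tgt):
            # One node entry per unique name.
            nodes.setdefault(name, {"data": {"id": name}})
        edges.append({"data": {"id": f"e{i}", "source": src,
                               "target": tgt, "interaction": rel}})
    return {"elements": {"nodes": list(nodes.values()), "edges": edges}}

# Hypothetical gene-interaction triples, not from the real dataset.
doc = to_cytoscape([("GeneA", "interacts_with", "GeneB"),
                    ("GeneB", "expresses", "ProteinC")])
print(json.dumps(doc, indent=2))
```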
Yes. Something like that.
I made a strong argument about why JSON is exactly the wrong format. (I mean, it might be great for the bio data; it's just wrong in general, for other things.) The general discussion is about a database of s-expressions. There already exist several databases of JSON expressions, at least some having commercial companies behind them. I think that a database of s-expressions is still a good commercial startup idea. Just that, well, the local crowd around these parts does not have much business sense. But if you think you know someone who could be a tech startup CEO, or someone who can do marketing and sales, then building and selling an s-expression database -- I think it's a viable business.
I built grpc from source. Took ages. But it worked fine on this point. (I'm hitting another issue: #6 )
I'm trying to build this package (so that I can build the latest annotation-scheme), and it's now failing with... ???