---
description: Overview of current Artificial Intelligence use cases built on the Verida Network
---

# AI Use Cases

The Verida infrastructure enables application developers to build AI products and services using private user data.

Big tech owns us all: it owns our data and monetizes it without any of that value being returned to us. With Verida, users take custody of their data and use it as they choose, thanks to selective disclosure and permissioned access. Users retain full control and choice.

Verida enables developers to build hyper-personalized AI agents based on an individual user's data, or on aggregated user data, without compromising individual information. The problem with big tech's data usage is that individual data points can be traced back to you as a user. That is no longer necessary: Verida's confidential infrastructure allows user data to be pooled so the underlying information is available for LLMs to learn from, without being tied to you as an individual, delivering a more equitable result for both user and application.

## AI Training / Data Marketplaces / Data Pools

High quality data is necessary to train AI models. Verida provides infrastructure that allows individuals to extract their data from existing platforms (ie: social, health, finance, messaging) and consensually share that data with AI data marketplaces.

Data marketplaces can be built to request data from end users via the [Private Data APIs](private-data-apis.md), verify the authenticity of that data via the Verida Network, and even reward end users with crypto.
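The snippet below is a minimal TypeScript sketch of that request–verify–reward loop. The endpoint URL, request shape and `verifyProof` helper are illustrative assumptions, not the actual Verida API surface; see [Private Data APIs](private-data-apis.md) for the real interface.

```typescript
// Illustrative sketch only: the endpoint, request shape and verifyProof helper
// are hypothetical placeholders, not the real Verida Private Data API.

interface DataRequest {
  did: string;              // the end user's decentralized identifier
  scopes: string[];         // data types requested, e.g. ["social-post", "health-activity"]
  rewardPerRecord: number;  // tokens offered per record the user agrees to share
}

async function requestConsentedData(req: DataRequest, userGrantedToken: string) {
  // 1. Request records the user has consented to share.
  const res = await fetch("https://private-data-api.example.com/query", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${userGrantedToken}`, // issued via the user's consent flow
    },
    body: JSON.stringify({ did: req.did, scopes: req.scopes }),
  });
  const { records, proof } = await res.json();

  // 2. Verify authenticity against the Verida Network before accepting the data.
  if (!verifyProof(proof)) {
    throw new Error("Data failed authenticity verification");
  }

  // 3. Calculate the crypto reward owed to the user for the shared records.
  return { records, rewardOwed: records.length * req.rewardPerRecord };
}

// Placeholder for whatever signature / network verification the marketplace performs.
function verifyProof(proof: unknown): boolean {
  return Boolean(proof);
}
```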

Data pools can be created and then connected to AI models to provide expert knowledge bases built on private user data, without exposing the actual user data.

## Retrieval-augmented generation (RAG)

Verida enables private user data to be connected to Large Language Models (LLMs) via RAG to create powerful, personalized AI products and services.

For example:

  1. Query a user's private data from an AI prompt to provide personalized services (ie: *Where have I travelled in the last 5 years?*, then recommend a holiday destination for next year)
  2. Query a user's private data and feed it into a custom RAG pipeline to personalize an existing product or service (see the sketch below)
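A minimal sketch of the second pattern follows. `searchPrivateData` and `generate` are illustrative stubs, not real Verida or LLM SDK functions: in practice the retrieval step would query the user's consented datastore and the generation step would call your LLM of choice.

```typescript
// Minimal RAG sketch. searchPrivateData and generate are illustrative stubs,
// not real Verida or LLM SDK calls.

interface PrivateRecord {
  sourceType: string; // e.g. "travel", "calendar", "email"
  text: string;
}

// Stub retrieval step: in practice this would query the user's consented
// datastore (keyword or vector search) rather than an in-memory array.
async function searchPrivateData(did: string, query: string): Promise<PrivateRecord[]> {
  const allRecords: PrivateRecord[] = [
    { sourceType: "travel", text: "Flights to Lisbon, April 2023" },
    { sourceType: "travel", text: "Hotel booking in Kyoto, October 2024" },
  ];
  return allRecords.filter((r) => query.toLowerCase().includes(r.sourceType));
}

// Stub generation step: in practice this calls whichever LLM the app uses.
async function generate(prompt: string): Promise<string> {
  return `LLM response for prompt of ${prompt.length} characters`;
}

// Retrieve relevant private records, augment the prompt, then generate an answer.
async function answerWithRag(did: string, question: string): Promise<string> {
  const records = await searchPrivateData(did, question);
  const context = records.map((r) => `[${r.sourceType}] ${r.text}`).join("\n");

  const prompt =
    "Answer using only the personal context below.\n" +
    `Context:\n${context}\n\nQuestion: ${question}`;

  return generate(prompt);
}

// Example: answerWithRag(userDid, "Where have I travelled in the last 5 years?")
```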

## AI Agents

The Verida Data Bridge infrastructure enables users to connect to third-party APIs and extract their data. This user data can then be used by AI agents to know everything about a user.
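As an illustrative sketch only (the tool shape and `queryUserData` call below are assumptions, not the Data Bridge API), an agent framework might expose the user's synced data as a tool the agent can call on demand:

```typescript
// Illustrative agent tool: queryUserData is a hypothetical stand-in for
// reading data that the Data Bridge has synced on the user's behalf.

interface AgentTool {
  name: string;
  description: string;
  run: (input: string) => Promise<string>;
}

// Stand-in for a call to the user's synced datastore.
async function queryUserData(did: string, query: string): Promise<string[]> {
  return [`(records matching "${query}" belonging to ${did})`];
}

// Expose the user's data to the agent as a callable tool, so the agent can
// look up personal context on demand instead of having it hard-coded.
function makePersonalDataTool(did: string): AgentTool {
  return {
    name: "personal_data_lookup",
    description: "Search the user's consented personal data (email, calendar, social, etc.)",
    run: async (input: string) => (await queryUserData(did, input)).join("\n"),
  };
}
```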

{% hint style="info" %} Coming soon: We will enable third party applications (such as AI Agents) to perform actions based on connected services. ie: Send an email, Send a Telegram message, Order an Uber. {% endhint %}

## Confidential Compute (Coming Soon)

Verida is building out self-sovereign [confidential compute](confidential-compute.md) infrastructure (leveraging Trusted Execution Environments) that operates in secure enclaves to enforce the privacy and security of user data and any computation that occurs. This differs from current cloud AI providers (OpenAI, Groq, etc.), which run on non-privacy-preserving, generic compute.

*Figure: comparing current AI compute with Verida AI compute*

This infrastructure is ideally suited for confidential AI workloads:

* Training of new models
* Inference for AI prompts
* Agents acting on behalf of users

These compute resources have direct, fast access to user data via the [Private Data APIs](private-data-apis.md).