Name of this Analyzer.
See also databases.Database.
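For example (a minimal sketch; "some-analyzer" is a placeholder name):
const db = new Database();
const analyzer = db.analyzer("some-analyzer");
console.log(analyzer.name); // "some-analyzer"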
Creates a new Analyzer with the given options and the instance's name.
See also databases.Database#createAnalyzer.
Options for creating the Analyzer.
const db = new Database();
const analyzer = db.analyzer("potatoes");
await analyzer.create({ type: "identity" });
// the identity Analyzer "potatoes" now exists
Deletes the Analyzer from the database.
Whether the Analyzer should still be deleted even if it is currently in use.
const db = new Database();
const analyzer = db.analyzer("some-analyzer");
await analyzer.drop();
// the Analyzer "some-analyzer" no longer exists
Retrieves the Analyzer definition for the Analyzer.
const db = new Database();
const analyzer = db.analyzer("some-analyzer");
const definition = await analyzer.get();
// definition contains the Analyzer definition
The BatchCursor provides a batch-wise API to a Cursor.
When using TypeScript, cursors can be cast to a specific item type in order to increase type safety.
const db = new Database();
const query = aql`FOR x IN 1..5 RETURN x`;
const cursor = await db.query(query) as Cursor<number>;
const batches = cursor.batches;
const db = new Database();
const query = aql`FOR x IN 1..10000 RETURN x`;
const cursor = await db.query(query, { batchSize: 10 });
for await (const batch of cursor.batches) {
// Process all values in a batch in parallel
await Promise.all(batch.map(
value => asyncProcessValue(value)
));
}
Total number of documents in the query result. Only available if the count option was used.
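For example (a sketch; assumes the count option is enabled at query time):
const cursor = await db.query(
  aql`FOR x IN 1..5 RETURN x`,
  { count: true, batchSize: 2 }
);
console.log(cursor.batches.count); // 5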
Additional information about the cursor.
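For example (a sketch; the exact contents of the extra object depend on the query and its options, e.g. profile):
const cursor = await db.query(
  aql`FOR x IN 1..5 RETURN x`,
  { profile: true }
);
console.log(cursor.batches.extra.stats); // execution statistics, if reported by the server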
Whether the cursor has any remaining batches that haven't yet been fetched. If set to false, all batches have been fetched and no additional requests to the server will be made when consuming any remaining batches from this cursor.
Whether the cursor has more batches. If set to false, the cursor has already been depleted and contains no more batches.
ID of this cursor.
A low-level interface for consuming the items of this BatchCursor.
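For example (a minimal sketch; items is assumed to be the item-wise cursor corresponding to this BatchCursor):
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const batchCursor = cursor.batches;
const value = await batchCursor.items.next(); // 1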
Enables use with for await to deplete the cursor by asynchronously yielding every batch in the cursor's remaining result set.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
const cursor = await db.query(aql`
FOR user IN users
FILTER user.isActive
RETURN user
`);
for await (const users of cursor.batches) {
for (const user of users) {
console.log(user.email, user.isAdmin);
}
}
Depletes the cursor, then returns an array containing all batches in the cursor's remaining result list.
const cursor = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 2 }
);
const result = await cursor.batches.all(); // [[1, 2], [3, 4], [5]]
console.log(cursor.hasNext); // false
Depletes the cursor by applying the callback function to each batch in the cursor's remaining result list. Returns an array containing the return values of callback for each batch, flattened to a depth of 1.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
See also: Array.prototype.flatMap.
const cursor = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 2 }
);
const squares = await cursor.batches.flatMap((currentBatch) => {
return currentBatch.map((value) => value ** 2);
});
console.log(squares); // [1, 4, 9, 16, 25]
console.log(cursor.hasNext); // false
const cursor = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 1 }
);
const odds = await cursor.batches.flatMap((currentBatch) => {
if (currentBatch[0] % 2 === 0) {
return []; // empty array flattens into nothing
}
return currentBatch;
});
console.log(odds); // [1, 3, 5]
Advances the cursor by applying the callback function to each batch in the cursor's remaining result list until the cursor is depleted or callback returns the exact value false. Returns a promise that evaluates to true unless the function returned false.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
See also: Array.prototype.forEach.
Function to execute on each batch.
const cursor = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 2 }
);
const result = await cursor.batches.forEach((currentBatch) => {
for (const value of currentBatch) {
console.log(value);
}
});
console.log(result); // true
console.log(cursor.hasNext); // false
const cursor = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 2 }
);
const result = await cursor.batches.forEach((currentBatch) => {
for (const value of currentBatch) {
console.log(value);
}
return false; // stop after the first batch
});
console.log(result); // false
console.log(cursor.hasNext); // true
Kills the cursor and frees up associated database resources.
This method has no effect if all batches have already been consumed.
const cursor1 = await db.query(aql`FOR x IN 1..5 RETURN x`);
console.log(cursor1.hasMore); // false
await cursor1.kill(); // no effect
const cursor2 = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 2 }
);
console.log(cursor2.hasMore); // true
await cursor2.kill(); // cursor is depleted
Loads all remaining batches from the server.
Warning: This may impact memory use when working with very large query result sets.
const cursor = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 1 }
);
console.log(cursor.hasMore); // true
await cursor.batches.loadAll();
console.log(cursor.hasMore); // false
console.log(cursor.hasNext); // true
for await (const item of cursor) {
console.log(item);
// No server roundtrips necessary any more
}
Depletes the cursor by applying the callback function to each batch in the cursor's remaining result list. Returns an array containing the return values of callback for each batch.
Note: This creates an array of all return values, which may impact memory use when working with very large query result sets. Consider using BatchCursor#forEach, BatchCursor#reduce or BatchCursor#flatMap instead.
See also: Array.prototype.map.
const cursor = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 2 }
);
const squares = await cursor.batches.map((currentBatch) => {
return currentBatch.map((value) => value ** 2);
});
console.log(squares); // [[1, 4], [9, 16], [25]]
console.log(cursor.hasNext); // false
Advances the cursor and returns all remaining values in the cursor's current batch. If the current batch has already been exhausted, fetches the next batch from the server and returns it, or undefined if the cursor has been depleted.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
const cursor = await db.query(
aql`FOR i IN 1..10 RETURN i`,
{ batchSize: 5 }
);
const firstBatch = await cursor.batches.next(); // [1, 2, 3, 4, 5]
await cursor.next(); // 6
const lastBatch = await cursor.batches.next(); // [7, 8, 9, 10]
console.log(cursor.hasNext); // false
Depletes the cursor by applying the reducer function to each batch in the cursor's remaining result list. Returns the return value of reducer for the last batch.
Note: Most complex uses of the reduce method can be replaced with simpler code using BatchCursor#forEach or the for await syntax.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
See also: Array.prototype.reduce.
function largestValue(baseline, values) {
return Math.max(baseline, ...values);
}
const cursor = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 3 }
);
const result = await cursor.batches.reduce(largestValue, 0);
console.log(result); // 5
console.log(cursor.hasNext); // false
const emptyResult = await cursor.batches.reduce(largestValue, 0);
console.log(emptyResult); // 0
// BAD! NEEDLESSLY COMPLEX!
const cursor = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 1 }
);
const result = await cursor.batches.reduce((accumulator, currentBatch) => {
if (currentBatch[0] % 2 === 0) {
accumulator.even.push(...currentBatch);
} else {
accumulator.odd.push(...currentBatch);
}
return accumulator;
}, { odd: [], even: [] });
console.log(result); // { odd: [1, 3, 5], even: [2, 4] }
// GOOD! MUCH SIMPLER!
const cursor = await db.query(
  aql`FOR x IN 1..5 RETURN x`,
  { batchSize: 1 }
);
const odd = [];
const even = [];
for await (const currentBatch of cursor.batches) {
if (currentBatch[0] % 2 === 0) {
even.push(...currentBatch);
} else {
odd.push(...currentBatch);
}
}
console.log({ odd, even }); // { odd: [1, 3, 5], even: [2, 4] }
Depletes the cursor by applying the reducer function to each batch in the cursor's remaining result list. Returns the return value of reducer for the last batch.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
See also: Array.prototype.reduce.
function largestValue(values1, values2) {
return [Math.max(...values1, ...values2)];
}
const cursor = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 3 }
);
const result = await cursor.batches.reduce(largestValue);
console.log(result); // [5]
console.log(cursor.hasNext); // false
The Cursor type represents a cursor returned from a databases.Database#query.
When using TypeScript, cursors can be cast to a specific item type in order to increase type safety.
See also BatchCursor.
const db = new Database();
const query = aql`FOR x IN 1..5 RETURN x`;
const result = await db.query(query) as Cursor<number>;
const db = new Database();
const query = aql`FOR x IN 1..10 RETURN x`;
const cursor = await db.query(query);
for await (const value of cursor) {
// Process each value asynchronously
await processValue(value);
}
A BatchCursor providing batch-wise access to the cursor result set.
See also BatchCursor#items.
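For example (a minimal sketch):
const cursor = await db.query(
  aql`FOR x IN 1..10 RETURN x`,
  { batchSize: 5 }
);
const firstBatch = await cursor.batches.next(); // [1, 2, 3, 4, 5]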
Total number of documents in the query result. Only available if the count option was used.
Additional information about the cursor.
Whether the cursor has more values. If set to false, the cursor has already been depleted and contains no more items.
ID of this cursor.
Enables use with for await to deplete the cursor by asynchronously yielding every value in the cursor's remaining result set.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
const cursor = await db.query(aql`
FOR user IN users
FILTER user.isActive
RETURN user
`);
for await (const user of cursor) {
console.log(user.email, user.isAdmin);
}
Depletes the cursor, then returns an array containing all values in the cursor's remaining result list.
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const result = await cursor.all(); // [1, 2, 3, 4, 5]
console.log(cursor.hasNext); // false
Depletes the cursor by applying the callback function to each item in the cursor's remaining result list. Returns an array containing the return values of callback for each item, flattened to a depth of 1.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
See also: Array.prototype.flatMap.
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const squares = await cursor.flatMap((currentValue) => {
return [currentValue, currentValue ** 2];
});
console.log(squares); // [1, 1, 2, 4, 3, 9, 4, 16, 5, 25]
console.log(cursor.hasNext); // false
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const odds = await cursor.flatMap((currentValue) => {
if (currentValue % 2 === 0) {
return []; // empty array flattens into nothing
}
return currentValue; // or [currentValue]
});
console.log(odds); // [1, 3, 5]
Advances the cursor by applying the callback function to each item in the cursor's remaining result list until the cursor is depleted or callback returns the exact value false. Returns a promise that evaluates to true unless the function returned false.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
See also: Array.prototype.forEach.
Function to execute on each element.
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const result = await cursor.forEach((currentValue) => {
console.log(currentValue);
});
console.log(result); // true
console.log(cursor.hasNext); // false
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const result = await cursor.forEach((currentValue) => {
console.log(currentValue);
return false; // stop after the first item
});
console.log(result); // false
console.log(cursor.hasNext); // true
Kills the cursor and frees up associated database resources.
This method has no effect if all batches have already been fetched.
const cursor1 = await db.query(aql`FOR x IN 1..5 RETURN x`);
console.log(cursor1.hasMore); // false
await cursor1.kill(); // no effect
const cursor2 = await db.query(
aql`FOR x IN 1..5 RETURN x`,
{ batchSize: 2 }
);
console.log(cursor2.hasMore); // true
await cursor2.kill(); // cursor is depleted
Depletes the cursor by applying the callback function to each item in the cursor's remaining result list. Returns an array containing the return values of callback for each item.
Note: This creates an array of all return values, which may impact memory use when working with very large query result sets. Consider using Cursor#forEach, Cursor#reduce or Cursor#flatMap instead.
See also: Array.prototype.map.
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const squares = await cursor.map((currentValue) => {
return currentValue ** 2;
});
console.log(squares); // [1, 4, 9, 16, 25]
console.log(cursor.hasNext); // false
Advances the cursor and returns the next value in the cursor's remaining result list, or undefined if the cursor has been depleted.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
const cursor = await db.query(aql`FOR x IN 1..3 RETURN x`);
const one = await cursor.next(); // 1
const two = await cursor.next(); // 2
const three = await cursor.next(); // 3
const empty = await cursor.next(); // undefined
Depletes the cursor by applying the reducer function to each item in the cursor's remaining result list. Returns the return value of reducer for the last item.
Note: Most complex uses of the reduce method can be replaced with simpler code using Cursor#forEach or the for await syntax.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
See also: Array.prototype.reduce.
function largestOfTwo(one, two) {
return Math.max(one, two);
}
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const result = await cursor.reduce(largestOfTwo, 0);
console.log(result); // 5
console.log(cursor.hasNext); // false
const emptyResult = await cursor.reduce(largestOfTwo, 0);
console.log(emptyResult); // 0
// BAD! NEEDLESSLY COMPLEX!
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const result = await cursor.reduce((accumulator, currentValue) => {
if (currentValue % 2 === 0) {
accumulator.even.push(currentValue);
} else {
accumulator.odd.push(currentValue);
}
return accumulator;
}, { odd: [], even: [] });
console.log(result); // { odd: [1, 3, 5], even: [2, 4] }
// GOOD! MUCH SIMPLER!
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const odd = [];
const even = [];
for await (const currentValue of cursor) {
if (currentValue % 2 === 0) {
even.push(currentValue);
} else {
odd.push(currentValue);
}
}
console.log({ odd, even }); // { odd: [1, 3, 5], even: [2, 4] }
Depletes the cursor by applying the reducer function to each item in the cursor's remaining result list. Returns the return value of reducer for the last item.
Note: If the result set spans multiple batches, any remaining batches will only be fetched on demand. Depending on the cursor's TTL and the processing speed, this may result in the server discarding the cursor before it is fully depleted.
See also: Array.prototype.reduce.
function largestOfTwo(one, two) {
return Math.max(one, two);
}
const cursor = await db.query(aql`FOR x IN 1..5 RETURN x`);
const result = await cursor.reduce(largestOfTwo);
console.log(result); // 5
console.log(cursor.hasNext); // false
const emptyResult = await cursor.reduce(largestOfTwo);
console.log(emptyResult); // undefined
An object representing a single ArangoDB database. All arangojs collections, cursors, analyzers and so on are linked to a Database object.
Creates a new Database instance with its own connection pool.
See also Database#database.
Optional config: ConfigOptions
An object with configuration options.
const db = new Database({
url: "http://127.0.0.1:8529",
databaseName: "my_database",
auth: { username: "admin", password: "hunter2" },
});
Creates a new Database instance with its own connection pool.
See also Database#database.
Base URL of the ArangoDB server or list of server URLs.
Equivalent to the url option in configuration.ConfigOptions.
Optional name: string
Name of the database to use.
const db = new Database("http://127.0.0.1:8529", "my_database");
db.useBasicAuth("admin", "hunter2");
Name of the ArangoDB database this instance represents.
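For example (a minimal sketch; assumes the default database "_system" is used when no name is specified):
const db = new Database();
console.log(db.name); // "_system"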
Methods for accessing the server-reported queue times of the most recently received responses.
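For example (a sketch; getLatest is assumed to return the queue time reported with the most recent response, in seconds):
const db = new Database();
await db.version();
const latest = db.queueTime.getLatest();
// latest is the queue time the server reported for the last response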
Updates the URL list by requesting a list of all coordinators in the cluster and adding any endpoints not initially specified in the configuration.ConfigOptions.
For long-running processes communicating with an ArangoDB cluster it is recommended to run this method periodically (e.g. once per hour) to make sure new coordinators are picked up correctly and can be used for fail-over or load balancing.
If set to true, the existing host list will be replaced instead of extended.
const db = new Database();
const interval = setInterval(
() => db.acquireHostList(),
5 * 60 * 1000 // every 5 minutes
);
// later
clearInterval(interval);
db.close();
Returns an analyzers.Analyzer instance representing the Analyzer with the given analyzerName.
const db = new Database();
const analyzer = db.analyzer("some-analyzer");
const info = await analyzer.get();
Fetches all Analyzers visible in the database and returns an array of analyzers.Analyzer instances for those Analyzers.
See also Database#listAnalyzers.
const db = new Database();
const analyzers = await db.analyzers();
// analyzers is an array of Analyzer instances
Fetches availability information about the server.
If set to true, the method will always return false instead of throwing an error; otherwise false will only be returned when the server responds with a 503 status code or an ArangoDB error with a code of 503, such as during shutdown.
const availability = await db.availability();
// availability is either "default", "readonly", or false
Begins a new streaming transaction for the given collections, then returns a transactions.Transaction instance for the transaction.
Collections can be specified as collection names (strings) or objects implementing the collections.ArangoCollection interface: Collection, graphs.GraphVertexCollection, graphs.GraphEdgeCollection as well as (in TypeScript) collections.DocumentCollection and collections.EdgeCollection.
Collections involved in the transaction.
Optional options: TransactionOptions
Options for the transaction.
const vertices = db.collection("vertices");
const edges = db.collection("edges");
const trx = await db.beginTransaction({
read: ["vertices"],
write: [edges] // collection instances can be passed directly
});
const start = await trx.step(() => vertices.document("a"));
const end = await trx.step(() => vertices.document("b"));
await trx.step(() => edges.save({ _from: start._id, _to: end._id }));
await trx.commit();
Begins a new streaming transaction for the given collections, then returns a transactions.Transaction instance for the transaction.
Collections can be specified as collection names (strings) or objects implementing the collections.ArangoCollection interface: Collection, graphs.GraphVertexCollection, graphs.GraphEdgeCollection as well as (in TypeScript) collections.DocumentCollection and collections.EdgeCollection.
Collections that can be read from and written to during the transaction.
Optional options: TransactionOptions
Options for the transaction.
const vertices = db.collection("vertices");
const edges = db.collection("edges");
const trx = await db.beginTransaction([
"vertices",
edges // collection instances can be passed directly
]);
const start = await trx.step(() => vertices.document("a"));
const end = await trx.step(() => vertices.document("b"));
await trx.step(() => edges.save({ _from: start._id, _to: end._id }));
await trx.commit();
Begins a new streaming transaction for the given collection, then returns a transactions.Transaction instance for the transaction.
The collection can be specified as a collection name (string) or an object implementing the collections.ArangoCollection interface: Collection, graphs.GraphVertexCollection, graphs.GraphEdgeCollection as well as (in TypeScript) collections.DocumentCollection and collections.EdgeCollection.
A collection that can be read from and written to during the transaction.
Optional options: TransactionOptions
Options for the transaction.
const vertices = db.collection("vertices");
const start = await vertices.document("a");
const end = await vertices.document("b");
const edges = db.collection("edges");
const trx = await db.beginTransaction(
edges // collection instances can be passed directly
);
await trx.step(() => edges.save({ _from: start._id, _to: end._id }));
await trx.commit();
Clears the list of recent slow queries.
See also Database#listSlowQueries.
const db = new Database();
await db.clearSlowQueries();
// Slow query list is now cleared
Clears the given ArangoDB user's access level for the database, or the given collection in the given database.
Name of the ArangoDB user to clear the access level for.
const db = new Database();
await db.clearUserAccessLevel("steve");
// The access level of the user "steve" has been cleared for the current
// database.
const db = new Database();
await db.clearUserAccessLevel("steve", { database: "staging" });
// The access level of the user "steve" has been cleared for the "staging"
// database.
const db = new Database();
await db.clearUserAccessLevel("steve", { collection: "pokemons" });
// The access level of the user "steve" has been cleared for the
// "pokemons" collection in the current database.
const db = new Database();
await db.clearUserAccessLevel("steve", {
database: "staging",
collection: "pokemons"
});
// The access level of the user "steve" has been cleared for the
// "pokemons" collection in the "staging" database.
const db = new Database();
const staging = db.database("staging");
await db.clearUserAccessLevel("steve", { database: staging });
// The access level of the user "steve" has been cleared for the "staging"
// database.
const db = new Database();
const staging = db.database("staging");
await db.clearUserAccessLevel("steve", {
collection: staging.collection("pokemons")
});
// The access level of the user "steve" has been cleared for the
// "pokemons" collection in database "staging".
Closes all active connections of this database instance.
Can be used to clean up idling connections during longer periods of inactivity.
Note: This method currently has no effect in the browser version of arangojs.
const db = new Database();
const sessions = db.collection("sessions");
// Clean up expired sessions once per hour
setInterval(async () => {
await db.query(aql`
FOR session IN ${sessions}
FILTER session.expires < DATE_NOW()
REMOVE session IN ${sessions}
`);
// Making sure to close the connections because they're no longer used
db.close();
}, 1000 * 60 * 60);
Returns a Collection instance for the given collection name.
In TypeScript the collection implements both the collections.DocumentCollection and collections.EdgeCollection interfaces and can be cast to either type to enforce a stricter API.
Type to represent document contents returned by the server (including computed properties).
Type to represent document contents passed when inserting or replacing documents (without computed properties).
Name of the collection.
const db = new Database();
const collection = db.collection("potatoes");
interface Person {
name: string;
}
const db = new Database();
const persons = db.collection<Person>("persons");
interface Person {
name: string;
}
interface Friend {
startDate: number;
endDate?: number;
}
const db = new Database();
const documents = db.collection("persons") as DocumentCollection<Person>;
const edges = db.collection("friends") as EdgeCollection<Friend>;
Fetches all collections from the database and returns an array of Collection instances.
In TypeScript these instances implement both the collections.DocumentCollection and collections.EdgeCollection interfaces and can be cast to either type to enforce a stricter API.
See also Database#listCollections.
Whether system collections should be excluded.
const db = new Database();
const collections = await db.collections();
// collections is an array of DocumentCollection and EdgeCollection
// instances not including system collections
const db = new Database();
const collections = await db.collections(false);
// collections is an array of DocumentCollection and EdgeCollection
// instances including system collections
Writes all locally available services to the database and updates any service bundles missing in the database.
If set to true, outdated services will also be committed. This can be used to solve some consistency problems when service bundles are missing in the database or were deleted manually.
await db.commitLocalServiceState();
// all services available on the coordinator have been written to the db
await db.commitLocalServiceState(true);
// all service conflicts have been resolved in favor of this coordinator
Computes a set of move shard operations to rebalance the cluster.
const db = new Database();
const result = await db.computeClusterRebalance({
moveLeaders: true,
moveFollowers: true
});
if (result.moves.length) {
await db.executeClusterRebalance(result.moves);
}
Creates a new Analyzer with the given analyzerName and options, then returns an analyzers.Analyzer instance for the new Analyzer.
Name of the Analyzer.
An object defining the properties of the Analyzer.
const db = new Database();
const analyzer = await db.createAnalyzer("potatoes", { type: "identity" });
// the identity Analyzer "potatoes" now exists
Creates a new collection with the given collectionName and options, then returns a collections.DocumentCollection instance for the new collection.
Type to represent document contents returned by the server (including computed properties).
Type to represent document contents passed when inserting or replacing documents (without computed properties).
Name of the new collection.
Optional options: CollectionPropertiesOptions & { ... }
Options for creating the collection.
const db = new Database();
const documents = await db.createCollection("persons");
interface Person {
name: string;
}
const db = new Database();
const documents = await db.createCollection<Person>("persons");
Creates a new edge collection with the given collectionName and options, then returns a collections.EdgeCollection instance for the new edge collection.
Type to represent edge document contents returned by the server (including computed properties).
Type to represent edge document contents passed when inserting or replacing documents (without computed properties).
Name of the new collection.
Options for creating the collection.
const db = new Database();
const edges = await db.createCollection("friends", {
type: CollectionType.EDGE_COLLECTION
});
interface Friend {
startDate: number;
endDate?: number;
}
const db = new Database();
const edges = await db.createCollection<Friend>("friends", {
type: CollectionType.EDGE_COLLECTION
});
Creates a new database with the given databaseName and options, then returns a Database instance for that database.
Name of the database to create.
Optional options: CreateDatabaseOptions
Options for creating the database.
const db = new Database();
const info = await db.createDatabase("mydb", {
users: [{ username: "root" }]
});
// the database has been created
Creates a new database with the given databaseName and users, then returns a Database instance for that database.
Name of the database to create.
Database users to create with the database.
const db = new Database();
const info = await db.createDatabase("mydb", [{ username: "root" }]);
// the database has been created
Creates a new edge collection with the given collectionName and options, then returns a collections.EdgeCollection instance for the new edge collection.
This is a convenience method for calling Database#createCollection with options.type set to EDGE_COLLECTION.
Type to represent edge document contents returned by the server (including computed properties).
Type to represent edge document contents passed when inserting or replacing documents (without computed properties).
Name of the new collection.
Optional options: CreateCollectionOptions
Options for creating the collection.
const db = new Database();
const edges = await db.createEdgeCollection("friends");
interface Friend {
startDate: number;
endDate?: number;
}
const db = new Database();
const edges = await db.createEdgeCollection<Friend>("friends");
Creates a graph with the given graphName and edgeDefinitions, then returns a graphs.Graph instance for the new graph.
Name of the graph to be created.
An array of edge definitions.
Optional options: CreateGraphOptions
An object defining the properties of the graph.
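For example (a minimal sketch; the graph and collection names in the edge definition are placeholders):
const db = new Database();
const graph = await db.createGraph("some-graph", [
  {
    collection: "edges",
    from: ["start-vertices"],
    to: ["end-vertices"],
  },
]);
// graph is a Graph instance for the new graph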
(Enterprise Edition only.) Creates a hot backup of the entire ArangoDB deployment including all databases, collections, etc.
Returns an object describing the backup result.
Options for creating the backup.
const info = await db.createHotBackup();
// a hot backup has been created
Creates an async job by executing the given callback function. The first database request performed by the callback will be marked for asynchronous execution and its result will be made available as an async job.
Returns a jobs.Job instance that can be used to retrieve the result of the callback function once the request has been executed.
const db = new Database();
const job = await db.createJob(() => db.collections());
while (!job.isLoaded) {
await timeout(1000);
await job.load();
}
// job.result is a list of Collection instances
Creates a new ArangoDB user with the given password.
Name of the ArangoDB user to create.
Password of the new ArangoDB user.
const db = new Database();
const user = await db.createUser("steve", "hunter2");
// The user "steve" has been created
Creates a new ArangoDB user with the given options.
Name of the ArangoDB user to create.
Additional options for creating the ArangoDB user.
const db = new Database();
const user = await db.createUser("steve", { passwd: "hunter2" });
// The user "steve" has been created
Creates an AQL user function with the given name and code if it does not already exist, or replaces it if a function with the same name already exists.
A valid AQL function name. The function name must consist of at least two alphanumeric identifiers separated with double colons.
A string evaluating to a JavaScript function (not a JavaScript function object).
If set to true, the function is expected to always return the same result for equivalent inputs. This option currently has no effect but may allow for optimizations in the future.
const db = new Database();
await db.createUserFunction(
"ACME::ACCOUNTING::CALCULATE_VAT",
"(price) => price * 0.19"
);
// Use the new function in an AQL query with template handler:
const cursor = await db.query(aql`
FOR product IN products
RETURN MERGE(
{ vat: ACME::ACCOUNTING::CALCULATE_VAT(product.price) },
product
)
`);
// cursor is a cursor for the query result
Creates a new View with the given viewName and options, then returns a views.View instance for the new View.
Name of the View.
An object defining the properties of the View.
const db = new Database();
const view = await db.createView("potatoes", { type: "arangosearch" });
// the ArangoSearch View "potatoes" now exists
Creates a new Database instance for the given databaseName that shares this database's connection pool.
See also the Database constructor.
Name of the database.
const systemDb = new Database();
const myDb = systemDb.database("my_database");
Fetches all databases from the server and returns an array of Database instances for those databases.
See also Database#listDatabases and Database#userDatabases.
const db = new Database();
const databases = await db.databases();
// databases is an array of Database instances
Deletes the results of all completed async jobs created before the given threshold.
The expiration timestamp in milliseconds.
const db = new Database();
const ONE_WEEK = 7 * 24 * 60 * 60 * 1000;
await db.deleteExpiredJobResults(Date.now() - ONE_WEEK);
// all job results older than a week have been deleted
Retrieves a zip bundle containing the service files.
Returns a Buffer in Node.js or a Blob in the browser.
The service's mount point, relative to the database.
const db = new Database();
const serviceBundle = await db.downloadService("/my-foxx");
Deletes the database with the given databaseName from the server.
Name of the database to delete.
const db = new Database();
await db.dropDatabase("mydb");
// database "mydb" no longer exists
Deletes the AQL user function with the given name from the database.
The name of the user function to drop.
If set to true, all functions with a name starting with name will be deleted; otherwise only the function with the exact name will be deleted.
const db = new Database();
await db.dropUserFunction("ACME::ACCOUNTING::CALCULATE_VAT");
// the function no longer exists
Fetches storage engine information from the ArangoDB server.
const db = new Database();
const engine = await db.engine();
// the engine object contains the storage engine information, e.g.
// name: name of the storage engine
Executes the given cluster move shard operations.
const db = new Database();
const result = await db.computeClusterRebalance({
moveLeaders: true,
moveFollowers: true
});
if (result.moves.length) {
await db.executeClusterRebalance(result.moves);
}
Performs a server-side JavaScript transaction and returns its return value.
Collections can be specified as collection names (strings) or objects implementing the collections.ArangoCollection interface: Collection, graphs.GraphVertexCollection, graphs.GraphEdgeCollection as well as (in TypeScript) collections.DocumentCollection and collections.EdgeCollection.
Note: The action function will be evaluated and executed on the server inside ArangoDB's embedded JavaScript environment and cannot access any values other than those passed via the params option.
See the official ArangoDB documentation for the JavaScript @arangodb module for information about accessing the database from within ArangoDB's server-side JavaScript environment.
Collections involved in the transaction.
A string evaluating to a JavaScript function to be executed on the server.
Optional options: TransactionOptions & { ... }
Options for the transaction. If options.allowImplicit is specified, it will be used if collections.allowImplicit was not specified.
const db = new Database();
const action = `
function(params) {
// This code will be executed inside ArangoDB!
const { query } = require("@arangodb");
return query\`
FOR user IN _users
FILTER user.age > \${params.age}
RETURN user.user
\`.toArray();
}
`;
const result = await db.executeTransaction({
read: ["_users"]
}, action, {
params: { age: 12 }
});
// result contains the return value of the action
Performs a server-side transaction and returns its return value.
Collections can be specified as collection names (strings) or objects implementing the collections.ArangoCollection interface: Collection, graphs.GraphVertexCollection, graphs.GraphEdgeCollection as well as (in TypeScript) collections.DocumentCollection and collections.EdgeCollection.
Note: The action function will be evaluated and executed on the server inside ArangoDB's embedded JavaScript environment and cannot access any values other than those passed via the params option.
See the official ArangoDB documentation for the JavaScript @arangodb module for information about accessing the database from within ArangoDB's server-side JavaScript environment.
Collections that can be read from and written to during the transaction.
A string evaluating to a JavaScript function to be executed on the server.
Optional options: TransactionOptions & { ... }
Options for the transaction.
const db = new Database();
const action = `
function(params) {
// This code will be executed inside ArangoDB!
const { query } = require("@arangodb");
return query\`
FOR user IN _users
FILTER user.age > \${params.age}
RETURN user.user
\`.toArray();
}
`;
const result = await db.executeTransaction(["_users"], action, {
params: { age: 12 }
});
// result contains the return value of the action
Performs a server-side transaction and returns its return value.
The collection can be specified as a collection name (string) or an object implementing the collections.ArangoCollection interface: Collection, graphs.GraphVertexCollection, graphs.GraphEdgeCollection as well as (in TypeScript) collections.DocumentCollection and collections.EdgeCollection.
Note: The action function will be evaluated and executed on the server inside ArangoDB's embedded JavaScript environment and cannot access any values other than those passed via the params option.
See the official ArangoDB documentation for the JavaScript @arangodb module for information about accessing the database from within ArangoDB's server-side JavaScript environment.
A collection that can be read from and written to during the transaction.
A string evaluating to a JavaScript function to be executed on the server.
Optional options: TransactionOptions & { ... }
Options for the transaction.
const db = new Database();
const action = `
function(params) {
// This code will be executed inside ArangoDB!
const { query } = require("@arangodb");
return query\`
FOR user IN _users
FILTER user.age > \${params.age}
RETURN user.user
\`.toArray();
}
`;
const result = await db.executeTransaction("_users", action, {
params: { age: 12 }
});
// result contains the return value of the action
Explains a database query using the given query.
See the aql.aql template string handler for information about how to create a query string without manually defining bind parameters nor having to worry about escaping variables.
An object containing an AQL query string and bind parameters, e.g. the object returned from an aql.aql template string.
Optional options: ExplainOptions & { ... }
Options for explaining the query.
const db = new Database();
const collection = db.collection("some-collection");
const explanation = await db.explain(aql`
FOR doc IN ${collection}
FILTER doc.flavor == "strawberry"
RETURN doc._key
`);
Explains a database query using the given query.
See the aql.aql template string handler for information about how to create a query string without manually defining bind parameters nor having to worry about escaping variables.
An object containing an AQL query string and bind parameters, e.g. the object returned from an aql.aql template string.
Optional options: ExplainOptions & { ... }
Options for explaining the query.
const db = new Database();
const collection = db.collection("some-collection");
const explanation = await db.explain(
aql`
FOR doc IN ${collection}
FILTER doc.flavor == "strawberry"
RETURN doc._key
`,
{ allPlans: true }
);
Explains a database query using the given query and bindVars.
See the aql.aql template string handler for a safer and easier alternative to passing strings directly.
An AQL query string.
Optional bindVars: Record<string, any>
An object defining bind parameters for the query.
Optional options: ExplainOptions & { ... }
Options for explaining the query.
const db = new Database();
const collection = db.collection("some-collection");
const explanation = await db.explain(
`
FOR doc IN @@collection
FILTER doc.flavor == "strawberry"
RETURN doc._key
`,
{ "@collection": collection.name }
);
Explains a database query using the given query and bindVars.
See the aql.aql template string handler for a safer and easier alternative to passing strings directly.
An AQL query string.
Optional bindVars: Record<string, any>
An object defining bind parameters for the query.
Optional options: ExplainOptions & { ... }
Options for explaining the query.
const db = new Database();
const collection = db.collection("some-collection");
const explanation = await db.explain(
`
FOR doc IN @@collection
FILTER doc.flavor == "strawberry"
RETURN doc._key
`,
{ "@collection": collection.name },
{ allPlans: true }
);
Fetches the database description for the active database from the server.
const db = new Database();
const info = await db.get();
// the database exists
Computes the current cluster imbalance.
const db = new Database();
const imbalance = await db.getClusterImbalance();
(Enterprise Edition only.) Retrieves a list of all locally found hot backups.
Optional id: string | string[]
If specified, only the backup with the given ID will be returned.
const backups = await db.getHotBackups();
for (const backup of backups.list) {
console.log(backup.id);
}
Retrieves the log messages from the server's global log.
Optional options: LogEntriesOptions
Options for retrieving the log entries.
const log = await db.getLogEntries();
for (let i = 0; i < log.totalAmount; i++) {
console.log(`${
new Date(log.timestamp[i] * 1000).toISOString()
} - [${LogLevel[log.level[i]]}] ${log.text[i]} (#${log.lid[i]})`);
}
Retrieves the server's current log level for each topic.
const levels = await db.getLogLevel();
console.log(levels.request); // log level for incoming requests
Retrieves information about a mounted service.
The service's mount point, relative to the database.
const db = new Database();
const info = await db.getService("/my-service");
// info contains detailed information about the service
Retrieves information about the service's configuration options and their current values.
See also Database#replaceServiceConfiguration and Database#updateServiceConfiguration.
The service's mount point, relative to the database.
Optional minimal: false
If set to true, the result will only include each configuration option's current value. Otherwise it will include the full definition for each option.
const db = new Database();
const config = await db.getServiceConfiguration("/my-service");
for (const [key, option] of Object.entries(config)) {
console.log(`${option.title} (${key}): ${option.current}`);
}
Retrieves information about the service's configuration options and their current values.
See also Database#replaceServiceConfiguration and Database#updateServiceConfiguration.
The service's mount point, relative to the database.
If set to true, the result will only include each configuration option's current value. Otherwise it will include the full definition for each option.
const db = new Database();
const config = await db.getServiceConfiguration("/my-service", true);
for (const [key, value] of Object.entries(config)) {
console.log(`${key}: ${value}`);
}
Retrieves information about the service's dependencies and their current mount points.
See also Database#replaceServiceDependencies and Database#updateServiceDependencies.
The service's mount point, relative to the database.
Optional minimal: false
If set to true, the result will only include each dependency's current mount point. Otherwise it will include the full definition for each dependency.
const db = new Database();
const deps = await db.getServiceDependencies("/my-service");
for (const [key, dep] of Object.entries(deps)) {
console.log(`${dep.title} (${key}): ${dep.current}`);
}
Retrieves information about the service's dependencies and their current mount points.
See also Database#replaceServiceDependencies and Database#updateServiceDependencies.
The service's mount point, relative to the database.
If set to true, the result will only include each dependency's current mount point. Otherwise it will include the full definition for each dependency.
const db = new Database();
const deps = await db.getServiceDependencies("/my-service", true);
for (const [key, value] of Object.entries(deps)) {
console.log(`${key}: ${value}`);
}
Retrieves an OpenAPI-compatible Swagger API description object for the service installed at the given mount point.
The service's mount point, relative to the database.
const db = new Database();
const spec = await db.getServiceDocumentation("/my-service");
// spec is a Swagger API description of the service
Retrieves the text content of the service's README or README.md file.
Returns undefined if no such file could be found.
The service's mount point, relative to the database.
const db = new Database();
const readme = await db.getServiceReadme("/my-service");
if (readme !== undefined) console.log(readme);
else console.warn(`No README found.`);
Retrieves an object mapping script names to their human-readable representations, as defined in the service manifest's "scripts" section.
The service's mount point, relative to the database.
const db = new Database();
const scripts = await db.getServiceScripts("/my-service");
for (const [name, title] of Object.entries(scripts)) {
console.log(`${name}: ${title}`);
}
Fetches the user data of a single ArangoDB user.
Name of the ArangoDB user to fetch.
const db = new Database();
const user = await db.getUser("steve");
// user is the user object for the user named "steve"
Fetches the given ArangoDB user's access level for the database, or the given collection in the given database.
Name of the ArangoDB user to fetch the access level for.
const db = new Database();
const accessLevel = await db.getUserAccessLevel("steve");
// The access level of the user "steve" has been fetched for the current
// database.
const db = new Database();
const accessLevel = await db.getUserAccessLevel("steve", {
database: "staging"
});
// The access level of the user "steve" has been fetched for the "staging"
// database.
const db = new Database();
const accessLevel = await db.getUserAccessLevel("steve", {
collection: "pokemons"
});
// The access level of the user "steve" has been fetched for the
// "pokemons" collection in the current database.
const db = new Database();
const accessLevel = await db.getUserAccessLevel("steve", {
database: "staging",
collection: "pokemons"
});
// The access level of the user "steve" has been fetched for the
// "pokemons" collection in the "staging" database.
const db = new Database();
const staging = db.database("staging");
const accessLevel = await db.getUserAccessLevel("steve", {
database: staging
});
// The access level of the user "steve" has been fetched for the "staging"
// database.
const db = new Database();
const staging = db.database("staging");
const accessLevel = await db.getUserAccessLevel("steve", {
collection: staging.collection("pokemons")
});
// The access level of the user "steve" has been fetched for the
// "pokemons" collection in database "staging".
Fetches an object mapping names of databases to the access level of the given ArangoDB user for those databases.
Name of the ArangoDB user to fetch the access levels for.
Optional full: false
Whether access levels for collections should be included.
const db = new Database();
const accessLevels = await db.getUserDatabases("steve");
for (const [databaseName, accessLevel] of Object.entries(accessLevels)) {
console.log(`${databaseName}: ${accessLevel}`);
}
Fetches an object mapping names of databases to the access level of the given ArangoDB user for those databases and the collections within each database.
Name of the ArangoDB user to fetch the access levels for.
Whether access levels for collections should be included.
const db = new Database();
const accessLevels = await db.getUserDatabases("steve", true);
for (const [databaseName, obj] of Object.entries(accessLevels)) {
console.log(`${databaseName}: ${obj.permission}`);
for (const [collectionName, accessLevel] of Object.entries(obj.collections)) {
console.log(`${databaseName}/${collectionName}: ${accessLevel}`);
}
}
Returns a graphs.Graph instance representing the graph with the given graphName.
Name of the graph.
const db = new Database();
const graph = db.graph("some-graph");
Fetches all graphs from the database and returns an array of graphs.Graph instances for those graphs.
See also Database#listGraphs.
const db = new Database();
const graphs = await db.graphs();
// graphs is an array of Graph instances
Installs a new service.
The service's mount point, relative to the database.
The service bundle to install.
Options for installing the service.
const db = new Database();
// Using a Blob wrapping a Buffer read in Node.js as source
const source = new Blob([fs.readFileSync("./my-foxx-service.zip")]);
const info = await db.installService("/hello", source);
const db = new Database();
// Using a Blob in Node.js as source
const source = await fs.openAsBlob("./my-foxx-service.zip");
const info = await db.installService("/hello", source);
const db = new Database();
// Using a File from a browser file input as source
const element = document.getElementById("my-file-input");
const source = element.files[0];
const info = await db.installService("/hello", source);
Kills a running query with the given queryId.
See also Database#listRunningQueries.
The ID of a currently running query.
const db = new Database();
const queries = await db.listRunningQueries();
await Promise.all(queries.map(
async (query) => {
if (query.state === "executing") {
await db.killQuery(query.id);
}
}
));
Fetches all Analyzers visible in the database and returns an array of Analyzer descriptions.
See also Database#analyzers.
const db = new Database();
const analyzers = await db.listAnalyzers();
// analyzers is an array of Analyzer descriptions
Fetches all collections from the database and returns an array of collection descriptions.
See also Database#collections.
Whether system collections should be excluded.
const db = new Database();
const collections = await db.listCollections();
// collections is an array of collection descriptions
// not including system collections
const db = new Database();
const collections = await db.listCollections(false);
// collections is an array of collection descriptions
// including system collections
Fetches all databases from the server and returns an array of their names.
See also Database#databases and Database#listUserDatabases.
const db = new Database();
const names = await db.listDatabases();
// names is an array of database names
Fetches all graphs from the database and returns an array of graph descriptions.
See also Database#graphs.
const db = new Database();
const graphs = await db.listGraphs();
// graphs is an array of graph descriptions
Retrieves the log messages from the server's global log.
Optional options: LogEntriesOptions
Options for retrieving the log entries.
This endpoint has been deprecated in ArangoDB 3.8. Use Database#getLogEntries instead.
const messages = await db.listLogMessages();
for (const m of messages) {
console.log(`${m.date} - [${m.level}] ${m.message} (#${m.id})`);
}
Fetches a list of information for all currently running queries.
See also Database#listSlowQueries and Database#killQuery.
const db = new Database();
const queries = await db.listRunningQueries();
Fetches a list of all installed services.
Whether system services should be excluded.
const db = new Database();
const services = await db.listServices();
const db = new Database();
const services = await db.listServices(false); // all services
Fetches a list of information for all recent slow queries.
See also Database#listRunningQueries and Database#clearSlowQueries.
const db = new Database();
const queries = await db.listSlowQueries();
// Only works if slow query tracking is enabled
Fetches all active transactions from the database and returns an array of transaction descriptions.
See also Database#transactions.
const db = new Database();
const transactions = await db.listTransactions();
// transactions is an array of transaction descriptions
Fetches all databases accessible to the active user from the server and returns an array of their names.
See also Database#userDatabases and Database#listDatabases.
const db = new Database();
const names = await db.listUserDatabases();
// names is an array of database names
Fetches a list of all AQL user functions registered with the database.
const db = new Database();
const functions = await db.listUserFunctions();
const names = functions.map(fn => fn.name);
Fetches all ArangoDB users visible to the authenticated user and returns an array of user objects.
const db = new Database();
const users = await db.listUsers();
// users is an array of user objects
Fetches all Views from the database and returns an array of View descriptions.
See also Database#views.
const db = new Database();
const views = await db.listViews();
// views is an array of View descriptions
Validates the given database credentials and exchanges them for an authentication token, then uses the authentication token for future requests and returns it.
The username to authenticate with.
The password to authenticate with.
const db = new Database();
await db.login("admin", "hunter2");
// future requests will be made with an authentication token for the "admin" user
Parses the given query and returns the result.
See the aql.aql template string handler for information about how to create a query string without manually defining bind parameters nor having to worry about escaping variables.
An AQL query string or an object containing an AQL query string and bind parameters, e.g. the object returned from an aql.aql template string.
const db = new Database();
const collection = db.collection("some-collection");
const ast = await db.parse(aql`
FOR doc IN ${collection}
FILTER doc.flavor == "strawberry"
RETURN doc._key
`);
Performs a database query using the given query, then returns a new cursors.Cursor instance for the result set.
See the aql.aql template string handler for information about how to create a query string without manually defining bind parameters nor having to worry about escaping variables.
Note: When executing a query in a streaming transaction using the step method, the resulting cursor will be bound to that transaction and you do not need to use the step method to consume it.
An object containing an AQL query string and bind parameters, e.g. the object returned from an aql.aql template string.
Optional options: QueryOptions
Options for the query execution.
const db = new Database();
const active = true;
const Users = db.collection("_users");
// Using an aql template string:
// Bind parameters are automatically extracted and arangojs collections
// are automatically passed as collection bind parameters.
const cursor = await db.query(aql`
FOR u IN ${Users}
FILTER u.authData.active == ${active}
RETURN u.user
`);
// cursor is a cursor for the query result
+
+const db = new Database();
const active = true;
const Users = db.collection("_users");
// Using an object with a regular multi-line string
const cursor = await db.query({
query: `
FOR u IN @@users
FILTER u.authData.active == @active
RETURN u.user
`,
bindVars: { active: active, "@users": Users.name }
});
+
+Performs a database query using the given query and bindVars, then
+returns a new cursors.Cursor instance for the result set.
See the aql.aql template string handler for a safer and easier +alternative to passing strings directly.
+Note: When executing a query in a streaming transaction using the
+step method, the resulting cursor will be bound to that transaction and
+you do not need to use the step method to consume it.
An AQL query string.
+Optional bindVars: Record<string, any>
An object defining bind parameters for the query.
+Optional options: QueryOptions
Options for the query execution.
+const db = new Database();
const active = true;
const Users = db.collection("_users");
const cursor = await db.query(
// A normal multi-line string
`
FOR u IN @@users
FILTER u.authData.active == @active
RETURN u.user
`,
{ active: active, "@users": Users.name }
);
+
+const db = new Database();
const active = true;
const Users = db.collection("_users");
const cursor = await db.query(
// An AQL literal created from a normal multi-line string
aql.literal(`
FOR u IN @@users
FILTER u.authData.active == @active
RETURN u.user
`),
{ active: active, "@users": Users.name }
);
+
+Fetches the available optimizer rules.
+const db = new Database();
const rules = await db.queryRules();
for (const rule of rules) {
console.log(rule.name);
}
+
+Fetches the query tracking properties.
+const db = new Database();
const tracking = await db.queryTracking();
console.log(tracking.enabled);
+
+Modifies the query tracking properties.
+Options for query tracking.
+const db = new Database();
// track up to 5 slow queries exceeding 5 seconds execution time
await db.setQueryTracking({
enabled: true,
trackSlowQueries: true,
maxSlowQueries: 5,
slowQueryThreshold: 5
});
+
+Computes a set of move shard operations to rebalance the cluster and +executes them.
+const db = new Database();
const result = await db.rebalanceCluster({
moveLeaders: true,
moveFollowers: true
});
// The cluster is now rebalanced.
+
+Renames the collection collectionName to newName.
Additionally removes any stored Collection instance for +collectionName from the Database instance's internal cache.
Note: Renaming collections may not be supported when ArangoDB is +running in a cluster configuration.
+Current name of the collection.
+The new name of the collection.
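+A minimal usage sketch (assuming a single-server deployment, since renaming may not be supported in clusters):
+const db = new Database();
await db.renameCollection("old-collection", "new-collection");
// the collection is now named "new-collection"
+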
+Renames the view viewName to newName.
Additionally removes any stored views.View instance for viewName from +the Database instance's internal cache.
Note: Renaming views may not be supported when ArangoDB is running in +a cluster configuration.
+Current name of the view.
+The new name of the view.
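+A minimal usage sketch (assuming a single-server deployment, since renaming may not be supported in clusters):
+const db = new Database();
await db.renameView("old-view", "new-view");
// the View is now named "new-view"
+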
+Attempts to renew the authentication token passed to Database#useBearerAuth +or returned and used by Database#login. If a new authentication +token is issued, it will be used for future requests and returned.
+const db = new Database();
await db.login("admin", "hunter2");
// ... later ...
const newToken = await db.renewAuthToken();
if (!newToken) {
  // no new token was issued
}
+
+Replaces an existing service with a new service by completely removing the +old service and installing a new service at the same mount point.
+The service's mount point, relative to the database.
+The service bundle to install.
+Options for replacing the service.
+const db = new Database();
// Using a Buffer in Node.js as source
const source = new Blob([fs.readFileSync("./my-foxx-service.zip")]);
const info = await db.replaceService("/hello", source);
+
+const db = new Database();
// Using a Blob in Node.js as source
const source = await fs.openAsBlob("./my-foxx-service.zip");
const info = await db.replaceService("/hello", source);
+
+const db = new Database();
// Using a File from a browser file input as source
const element = document.getElementById("my-file-input");
const source = element.files[0];
const info = await db.replaceService("/hello", source);
+
+Replaces the configuration of the given service, discarding any existing +values for options not specified.
+See also Database#updateServiceConfiguration and +Database#getServiceConfiguration.
+The service's mount point, relative to the database.
+An object mapping configuration option names to values.
+Optional minimal: false
If set to true, the result will only include each +configuration option's current value and warning (if any). +Otherwise it will include the full definition for each option.
const db = new Database();
const config = { currency: "USD", locale: "en-US" };
const info = await db.replaceServiceConfiguration("/my-service", config);
for (const [key, option] of Object.entries(info)) {
console.log(`${option.title} (${key}): ${option.value}`);
if (option.warning) console.warn(`Warning: ${option.warning}`);
}
+
+Replaces the configuration of the given service, discarding any existing +values for options not specified.
+See also Database#updateServiceConfiguration and +Database#getServiceConfiguration.
+The service's mount point, relative to the database.
+An object mapping configuration option names to values.
+If set to true, the result will only include each +configuration option's current value and warning (if any). +Otherwise it will include the full definition for each option.
const db = new Database();
const config = { currency: "USD", locale: "en-US" };
const info = await db.replaceServiceConfiguration("/my-service", config);
for (const [key, value] of Object.entries(info.values)) {
console.log(`${key}: ${value}`);
if (info.warnings[key]) console.warn(`Warning: ${info.warnings[key]}`);
}
+
+Replaces the dependencies of the given service, discarding any existing +mount points for dependencies not specified.
+See also Database#updateServiceDependencies and +Database#getServiceDependencies.
+The service's mount point, relative to the database.
+Optional minimal: false
If set to true, the result will only include each +dependency's current mount point. Otherwise it will include the full +definition for each dependency.
const db = new Database();
const deps = { mailer: "/mailer-api", auth: "/remote-auth" };
const info = await db.replaceServiceDependencies("/my-service", deps);
for (const [key, dep] of Object.entries(info)) {
console.log(`${dep.title} (${key}): ${dep.current}`);
if (dep.warning) console.warn(`Warning: ${dep.warning}`);
}
+
+Replaces the dependencies of the given service, discarding any existing +mount points for dependencies not specified.
+See also Database#updateServiceDependencies and +Database#getServiceDependencies.
+The service's mount point, relative to the database.
+If set to true, the result will only include each +dependency's current mount point. Otherwise it will include the full +definition for each dependency.
const db = new Database();
const deps = { mailer: "/mailer-api", auth: "/remote-auth" };
const info = await db.replaceServiceDependencies(
"/my-service",
deps,
true
);
for (const [key, value] of Object.entries(info.values)) {
console.log(`${key}: ${value}`);
if (info.warnings[key]) console.warn(`Warning: ${info.warnings[key]}`);
}
+
+Replaces the ArangoDB user's options with the new options.
+Name of the ArangoDB user to modify.
+New options to replace the user's existing options.
+const db = new Database();
const user = await db.replaceUser("steve", { passwd: "", active: false });
// The user "steve" has been set to inactive with an empty password
+
+(Enterprise Edition only.) Restores a consistent local hot backup.
+Returns the directory path of the restored backup.
+The ID of the backup to restore.
+await db.restoreHotBackup("2023-09-19T15.38.21Z_example");
// the backup has been restored
+
+Returns a new routes.Route instance for the given path (relative to the +database) that can be used to perform arbitrary HTTP requests.
+Optional path: string
The database-relative URL of the route. Defaults to the +database API root.
+Optional headers: Record<string, string> | Headers
Default headers that should be sent with each request to +the route.
+const db = new Database();
const myFoxxService = db.route("my-foxx-service");
const response = await myFoxxService.post("users", {
username: "admin",
password: "hunter2"
});
// response.body is the result of
// POST /_db/_system/my-foxx-service/users
// with JSON request body '{"username": "admin", "password": "hunter2"}'
+
+Executes a service script and retrieves its result exposed as
+module.exports (if any).
The service's mount point, relative to the database.
+Name of the service script to execute as defined in the +service manifest.
+Optional params: any
Arbitrary value that will be exposed to the script as +argv[0] in the service context (e.g. module.context.argv[0]). +Must be serializable to JSON.
const db = new Database();
const result = await db.runServiceScript(
"/my-service",
"create-user",
{
username: "service_admin",
password: "hunter2"
}
);
+
+Runs the tests of a given service and returns the results using the +"default" reporter.
+The service's mount point, relative to the database.
+Optional options
Options for running the tests.
+Optional filter?: string
If set, only tests with full names including this exact string will be +executed.
+Optional idiomatic?: false
Whether the reporter should use "idiomatic" mode. Has no effect when +using the "default" or "suite" reporters.
+Optional reporter?: "default"
const db = new Database();
const testReport = await db.runServiceTests("/my-foxx");
+
+Runs the tests of a given service and returns the results using the +"suite" reporter, which groups the test result by test suite.
+The service's mount point, relative to the database.
+Options for running the tests.
+Optional filter?: string
If set, only tests with full names including this exact string will be +executed.
+Optional idiomatic?: false
Whether the reporter should use "idiomatic" mode. Has no effect when +using the "default" or "suite" reporters.
+const db = new Database();
const suiteReport = await db.runServiceTests(
"/my-foxx",
{ reporter: "suite" }
);
+
+Runs the tests of a given service and returns the results using the +"stream" reporter, which represents the results as a sequence of tuples +representing events.
+The service's mount point, relative to the database.
+Options for running the tests.
+Optional filter?: string
If set, only tests with full names including this exact string will be +executed.
+Optional idiomatic?: false
Whether the reporter should use "idiomatic" mode. If set to true, +the results will be returned as a formatted string.
const db = new Database();
const streamEvents = await db.runServiceTests(
"/my-foxx",
{ reporter: "stream" }
);
+
+Runs the tests of a given service and returns the results using the +"tap" reporter, which represents the results as an array of strings using +the "tap" format.
+The service's mount point, relative to the database.
+Options for running the tests.
+Optional filter?: string
If set, only tests with full names including this exact string will be +executed.
+Optional idiomatic?: false
Whether the reporter should use "idiomatic" mode. If set to true, +the results will be returned as a formatted string.
const db = new Database();
const tapLines = await db.runServiceTests(
"/my-foxx",
{ reporter: "tap" }
);
+
+Runs the tests of a given service and returns the results using the +"xunit" reporter, which represents the results as an XML document using +the JSONML exchange format.
+The service's mount point, relative to the database.
+Options for running the tests.
+Optional filter?: string
If set, only tests with full names including this exact string will be +executed.
+Optional idiomatic?: false
Whether the reporter should use "idiomatic" mode. If set to true, +the results will be returned as a formatted string.
const db = new Database();
const jsonML = await db.runServiceTests(
"/my-foxx",
{ reporter: "xunit" }
);
+
+Runs the tests of a given service and returns the results as a string +using the "stream" reporter in "idiomatic" mode, which represents the +results as a line-delimited JSON stream of tuples representing events.
+The service's mount point, relative to the database.
+Options for running the tests.
+Optional filter?: string
If set, only tests with full names including this exact string will be +executed.
+Whether the reporter should use "idiomatic" mode. If set to false, +the results will be returned as an array of tuples instead of a +string.
const db = new Database();
const streamReport = await db.runServiceTests(
"/my-foxx",
{ reporter: "stream", idiomatic: true }
);
+
+Runs the tests of a given service and returns the results as a string +using the "tap" reporter in "idiomatic" mode, which represents the +results using the "tap" format.
+The service's mount point, relative to the database.
+Options for running the tests.
+Optional filter?: string
If set, only tests with full names including this exact string will be +executed.
+Whether the reporter should use "idiomatic" mode. If set to false, +the results will be returned as an array of strings instead of a +single string.
const db = new Database();
const tapReport = await db.runServiceTests(
"/my-foxx",
{ reporter: "tap", idiomatic: true }
);
+
+Runs the tests of a given service and returns the results as a string +using the "xunit" reporter in "idiomatic" mode, which represents the +results as an XML document.
+The service's mount point, relative to the database.
+Options for running the tests.
+Optional filter?: string
If set, only tests with full names including this exact string will be +executed.
+Whether the reporter should use "idiomatic" mode. If set to false, +the results will be returned using the JSONML exchange format +instead of a string.
const db = new Database();
const xml = await db.runServiceTests(
"/my-foxx",
{ reporter: "xunit", idiomatic: true }
);
+
+Sets the server's log level for each of the given topics to the given level.
+Any omitted topics will be left unchanged.
+An object mapping topic names to log levels.
+await db.setLogLevel({ request: "debug" });
// Debug information will now be logged for each request
+
+Sets the limit for the number of values of the most recently received +server-reported queue times that can be accessed using +Database#queueTime.
+Number of values to maintain.
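+A brief sketch (assuming the limit applies client-side to this Database instance and takes effect immediately):
+const db = new Database();
db.setResponseQueueTimeSamples(20);
// only the 20 most recent server-reported queue time values are retained
// and can then be read via db.queueTime
+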
+Enables or disables development mode for the given service.
+The service's mount point, relative to the database.
+Whether development mode should be enabled or disabled.
+const db = new Database();
await db.setServiceDevelopmentMode("/my-service", true);
// the service is now in development mode
await db.setServiceDevelopmentMode("/my-service", false);
// the service is now in production mode
+
+Sets the given ArangoDB user's access level for the database, or the +given collection in the given database.
+Name of the ArangoDB user to set the access level for.
+const db = new Database();
await db.setUserAccessLevel("steve", { grant: "rw" });
// The user "steve" now has read-write access to the current database.
+
+const db = new Database();
await db.setUserAccessLevel("steve", {
database: "staging",
grant: "rw"
});
// The user "steve" now has read-write access to the "staging" database.
+
+const db = new Database();
await db.setUserAccessLevel("steve", {
collection: "pokemons",
grant: "rw"
});
// The user "steve" now has read-write access to the "pokemons" collection
// in the current database.
+
+const db = new Database();
await db.setUserAccessLevel("steve", {
database: "staging",
collection: "pokemons",
grant: "rw"
});
// The user "steve" now has read-write access to the "pokemons" collection
// in the "staging" database.
+
+const db = new Database();
const staging = db.database("staging");
await db.setUserAccessLevel("steve", {
database: staging,
grant: "rw"
});
// The user "steve" now has read-write access to the "staging" database.
+
+const db = new Database();
const staging = db.database("staging");
await db.setUserAccessLevel("steve", {
collection: staging.collection("pokemons"),
grant: "rw"
});
// The user "steve" now has read-write access to the "pokemons" collection
// in database "staging".
+
+Fetches information about the server status.
+const status = await db.status();
// the status object contains the ArangoDB status information, e.g.
// version: ArangoDB version number
// host: host identifier of the server
// serverInfo: detailed information about the server
+
+Fetches deployment information about the server for support purposes.
+Note that this API may reveal sensitive data about the deployment.
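+A brief sketch (assuming this is exposed as the supportInfo method):
+const db = new Database();
const info = await db.supportInfo();
// info contains deployment information; treat it as sensitive
+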
+Returns a transactions.Transaction instance for an existing streaming
+transaction with the given id.
See also Database#beginTransaction.
+const trx1 = await db.beginTransaction(collections);
const id = trx1.id;
// later
const trx2 = db.transaction(id);
await trx2.commit();
+
+Fetches all active transactions from the database and returns an array of +transactions.Transaction instances for those transactions.
+See also Database#listTransactions.
+const db = new Database();
const transactions = await db.transactions();
// transactions is an array of transactions
+
+Completely removes a service from the database.
+The service's mount point, relative to the database.
+Optional options: UninstallServiceOptions
Options for uninstalling the service.
+const db = new Database();
await db.uninstallService("/my-foxx");
+
+Updates the configuration of the given service while maintaining any +existing values for options not specified.
+See also Database#replaceServiceConfiguration and +Database#getServiceConfiguration.
+The service's mount point, relative to the database.
+An object mapping configuration option names to values.
+Optional minimal: false
If set to true, the result will only include each +configuration option's current value and warning (if any). +Otherwise it will include the full definition for each option.
const db = new Database();
const config = { currency: "USD", locale: "en-US" };
const info = await db.updateServiceConfiguration("/my-service", config);
for (const [key, option] of Object.entries(info)) {
console.log(`${option.title} (${key}): ${option.value}`);
if (option.warning) console.warn(`Warning: ${option.warning}`);
}
+
+Updates the configuration of the given service while maintaining any +existing values for options not specified.
+See also Database#replaceServiceConfiguration and +Database#getServiceConfiguration.
+The service's mount point, relative to the database.
+An object mapping configuration option names to values.
+If set to true, the result will only include each +configuration option's current value and warning (if any). +Otherwise it will include the full definition for each option.
const db = new Database();
const config = { currency: "USD", locale: "en-US" };
const info = await db.updateServiceConfiguration("/my-service", config);
for (const [key, value] of Object.entries(info.values)) {
console.log(`${key}: ${value}`);
if (info.warnings[key]) console.warn(`Warning: ${info.warnings[key]}`);
}
+
+Updates the dependencies of the given service while maintaining any +existing mount points for dependencies not specified.
+See also Database#replaceServiceDependencies and +Database#getServiceDependencies.
+The service's mount point, relative to the database.
+Optional minimal: false
If set to true, the result will only include each +dependency's current mount point. Otherwise it will include the full +definition for each dependency.
const db = new Database();
const deps = { mailer: "/mailer-api", auth: "/remote-auth" };
const info = await db.updateServiceDependencies("/my-service", deps);
for (const [key, dep] of Object.entries(info)) {
console.log(`${dep.title} (${key}): ${dep.current}`);
if (dep.warning) console.warn(`Warning: ${dep.warning}`);
}
+
+Updates the dependencies of the given service while maintaining any +existing mount points for dependencies not specified.
+See also Database#replaceServiceDependencies and +Database#getServiceDependencies.
+The service's mount point, relative to the database.
+If set to true, the result will only include each +dependency's current mount point. Otherwise it will include the full +definition for each dependency.
const db = new Database();
const deps = { mailer: "/mailer-api", auth: "/remote-auth" };
const info = await db.updateServiceDependencies(
"/my-service",
deps,
true
);
for (const [key, value] of Object.entries(info.values)) {
console.log(`${key}: ${value}`);
if (info.warnings[key]) console.warn(`Warning: ${info.warnings[key]}`);
}
+
+Sets the password of a given ArangoDB user to the new value.
+Name of the ArangoDB user to change the password for.
+New password for the ArangoDB user.
+const db = new Database();
const user = await db.updateUser("steve", "hunter2");
// The user "steve" has received a new password
+
+Updates the ArangoDB user with the new options.
+Name of the ArangoDB user to modify.
+Options of the ArangoDB user to modify.
+const db = new Database();
const user = await db.updateUser("steve", { active: false });
// The user "steve" has been set to inactive
+
+Replaces an existing service with a new service while retaining the old +service's configuration and dependencies.
+The service's mount point, relative to the database.
+The service bundle to install.
+Options for upgrading the service.
+const db = new Database();
// Using a Buffer in Node.js as source
const source = new Blob([fs.readFileSync("./my-foxx-service.zip")]);
const info = await db.upgradeService("/hello", source);
+
+const db = new Database();
// Using a Blob in Node.js as source
const source = await fs.openAsBlob("./my-foxx-service.zip");
const info = await db.upgradeService("/hello", source);
+
+const db = new Database();
// Using a File from a browser file input as source
const element = document.getElementById("my-file-input");
const source = element.files[0];
const info = await db.upgradeService("/hello", source);
+
+Updates the underlying connection's authorization header to use Basic
+authentication with the given username and password, then returns
+itself.
The username to authenticate with.
+The password to authenticate with.
+const db = new Database();
db.useBasicAuth("admin", "hunter2");
// The database instance now uses Basic authentication as "admin".
+
+Updates the underlying connection's authorization header to use Bearer
+authentication with the given authentication token, then returns itself.
The token to authenticate with.
+const db = new Database();
db.useBearerAuth("keyboardcat");
// The database instance now uses Bearer authentication.
+
+Fetches all databases accessible to the active user from the server and
+returns an array of Database instances for those databases.
See also Database#listUserDatabases and +Database#databases.
+const db = new Database();
const dbs = await db.userDatabases();
// dbs is an array of Database instances
+
+Fetches version information from the ArangoDB server.
+Optional details: boolean
If set to true, additional information about the +ArangoDB server will be available as the details property.
const db = new Database();
const version = await db.version();
// the version object contains the ArangoDB version information.
// license: "community" or "enterprise"
// version: ArangoDB version number
// server: description of the server
+
+Returns a views.View instance for the given viewName.
Name of the ArangoSearch or SearchAlias View.
+const db = new Database();
const view = db.view("potatoes");
+
+Fetches all Views from the database and returns an array of +views.View instances +for the Views.
+See also Database#listViews.
+const db = new Database();
const views = await db.views();
// views is an array of ArangoSearch View instances
+
+Performs a request against every known coordinator and returns when the +request has succeeded against every coordinator or the timeout is reached.
+Note: This method is primarily intended to make database setup easier +in cluster scenarios and requires all coordinators to be known to arangojs +before the method is invoked. The method is not useful in single-server or +leader-follower replication scenarios.
+Request to perform against each known coordinator.
+Optional timeout: number
Maximum number of milliseconds to wait for propagation.
+const db = new Database({ loadBalancingStrategy: "ROUND_ROBIN" });
await db.acquireHostList();
const analyzer = db.analyzer("my-analyzer");
await analyzer.create();
await db.waitForPropagation(
{ pathname: `/_api/analyzer/${encodeURIComponent(analyzer.name)}` },
30000
);
// Analyzer has been propagated to all coordinators and can safely be used
+
+Begins and commits a transaction using the given callback. Individual +requests that are part of the transaction need to be wrapped in the step +function passed into the callback. If the promise returned by the callback +is rejected, the transaction will be aborted.
+Collections can be specified as collection names (strings) or objects
+implementing the collections.ArangoCollection interface: Collection,
+graphs.GraphVertexCollection, graphs.GraphEdgeCollection, as
+well as (in TypeScript) collections.DocumentCollection and
+collections.EdgeCollection.
Collections involved in the transaction.
+Callback function executing the transaction steps.
+Executes the given function locally as a single step of the transaction.
+Callback function returning a promise.
+Warning: The callback function should wrap a single call of an async
+arangojs method (e.g. a method on a Collection object of a collection
+that is involved in the transaction, or the db.query method).
+If the callback function is async, only the first promise-returning (or
+async) method call will be executed as part of the transaction. See the
+examples below for how to avoid common mistakes when using this method.
Note: Avoid defining the callback as an async function if possible +as arangojs will throw an error if the callback did not return a promise. +Async functions will return an empty promise by default, making it harder +to notice if you forgot to return something from the callback.
+Note: Although almost anything can be wrapped in a callback and passed +to this method, that does not guarantee ArangoDB can actually do it in a +transaction. Refer to the ArangoDB documentation if you are unsure whether +a given operation can be executed as part of a transaction. Generally any +modification or retrieval of data is eligible but modifications of +collections or databases are not.
+const db = new Database();
const vertices = db.collection("vertices");
const edges = db.collection("edges");
const trx = await db.beginTransaction({ write: [vertices, edges] });
// The following code will be part of the transaction
const left = await trx.step(() => vertices.save({ label: "left" }));
const right = await trx.step(() => vertices.save({ label: "right" }));
// Results from preceding actions can be used normally
await trx.step(() => edges.save({
_from: left._id,
_to: right._id,
data: "potato"
}));
// Transaction must be committed for changes to take effect
// Always call either trx.commit or trx.abort to end a transaction
await trx.commit();
+
+// BAD! If the callback is an async function it must only use await once!
await trx.step(async () => {
await collection.save(data);
await collection.save(moreData); // WRONG
});
// BAD! Callback function must use only one arangojs call!
await trx.step(() => {
return collection.save(data)
.then(() => collection.save(moreData)); // WRONG
});
// BETTER: Wrap every arangojs method call that should be part of the
// transaction in a separate `trx.step` call
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
+
+// BAD! If the callback is an async function it must not await before
// calling an arangojs method!
await trx.step(async () => {
await doSomethingElse();
return collection.save(data); // WRONG
});
// BAD! Any arangojs inside the callback must not happen inside a promise
// method!
await trx.step(() => {
return doSomethingElse()
.then(() => collection.save(data)); // WRONG
});
// BETTER: Perform any async logic needed outside the `trx.step` call
await doSomethingElse();
await trx.step(() => collection.save(data));
// OKAY: You can perform async logic in the callback after the arangojs
// method call as long as it does not involve additional arangojs method
// calls, but this makes it easy to make mistakes later
await trx.step(async () => {
await collection.save(data);
await doSomethingDifferent(); // no arangojs method calls allowed
});
+
+// BAD! The callback should not use any functions that themselves use any
// arangojs methods!
async function saveSomeData() {
await collection.save(data);
await collection.save(moreData);
}
await trx.step(() => saveSomeData()); // WRONG
// BETTER: Pass the transaction to functions that need to call arangojs
// methods inside a transaction
async function saveSomeData(trx) {
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
}
await saveSomeData(trx); // no additional `trx.step` call needed here
+
+// BAD! You must wait for the promise to resolve (or await on the
// `trx.step` call) before calling `trx.step` again!
trx.step(() => collection.save(data)); // WRONG
await trx.step(() => collection.save(moreData));
// BAD! The trx.step callback can not make multiple calls to async arangojs
// methods, not even using Promise.all!
await trx.step(() => Promise.all([ // WRONG
collection.save(data),
collection.save(moreData),
]));
// BAD! Multiple `trx.step` calls can not run in parallel!
await Promise.all([ // WRONG
trx.step(() => collection.save(data)),
trx.step(() => collection.save(moreData)),
]);
// BETTER: Always call `trx.step` sequentially, one after the other
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
// OKAY: The then callback can be used if async/await is not available
trx.step(() => collection.save(data))
.then(() => trx.step(() => collection.save(moreData)));
+
+// BAD! The callback will return an empty promise that resolves before
// the inner arangojs method call has even talked to ArangoDB!
await trx.step(async () => {
collection.save(data); // WRONG
});
// BETTER: Use an arrow function so you don't forget to return
await trx.step(() => collection.save(data));
// OKAY: Remember to always return when using a function body
await trx.step(() => {
return collection.save(data); // easy to forget!
});
// OKAY: You do not have to use arrow functions but it helps
await trx.step(function () {
return collection.save(data);
});
+
+// BAD! You can not pass promises instead of a callback!
await trx.step(collection.save(data)); // WRONG
// BETTER: Wrap the code in a function and pass the function instead
await trx.step(() => collection.save(data));
+
+// WORSE: Calls to non-async arangojs methods don't need to be performed
// as part of a transaction
const collection = await trx.step(() => db.collection("my-documents"));
// BETTER: If an arangojs method is not async and doesn't return promises,
// call it without `trx.step`
const collection = db.collection("my-documents");
+
+Optional options: TransactionOptions
Options for the transaction.
+const vertices = db.collection("vertices");
const edges = db.collection("edges");
await db.withTransaction(
{
read: ["vertices"],
write: [edges] // collection instances can be passed directly
},
async (step) => {
const start = await step(() => vertices.document("a"));
const end = await step(() => vertices.document("b"));
await step(() => edges.save({ _from: start._id, _to: end._id }));
}
);
+
+Begins and commits a transaction using the given callback. Individual +requests that are part of the transaction need to be wrapped in the step +function passed into the callback. If the promise returned by the callback +is rejected, the transaction will be aborted.
+Collections can be specified as collection names (strings) or objects
+implementing the collections.ArangoCollection interface: Collection,
+graphs.GraphVertexCollection, graphs.GraphEdgeCollection, as well as
+(in TypeScript) collections.DocumentCollection and collections.EdgeCollection.
Collections that can be read from and written to +during the transaction.
+Callback function executing the transaction steps.
+Executes the given function locally as a single step of the transaction.
+Callback function returning a promise.
+Warning: The callback function should wrap a single call of an async
+arangojs method (e.g. a method on a Collection object of a collection
+that is involved in the transaction, or the db.query method).
+If the callback function is async, only the first promise-returning (or
+async) method call will be executed as part of the transaction. See the
+examples below for how to avoid common mistakes when using this method.
Note: Avoid defining the callback as an async function if possible +as arangojs will throw an error if the callback did not return a promise. +Async functions will return an empty promise by default, making it harder +to notice if you forgot to return something from the callback.
+Note: Although almost anything can be wrapped in a callback and passed +to this method, that does not guarantee ArangoDB can actually do it in a +transaction. Refer to the ArangoDB documentation if you are unsure whether +a given operation can be executed as part of a transaction. Generally any +modification or retrieval of data is eligible but modifications of +collections or databases are not.
+const db = new Database();
const vertices = db.collection("vertices");
const edges = db.collection("edges");
const trx = await db.beginTransaction({ write: [vertices, edges] });
// The following code will be part of the transaction
const left = await trx.step(() => vertices.save({ label: "left" }));
const right = await trx.step(() => vertices.save({ label: "right" }));
// Results from preceding actions can be used normally
await trx.step(() => edges.save({
_from: left._id,
_to: right._id,
data: "potato"
}));
// Transaction must be committed for changes to take effect
// Always call either trx.commit or trx.abort to end a transaction
await trx.commit();
+
+// BAD! If the callback is an async function it must only use await once!
await trx.step(async () => {
await collection.save(data);
await collection.save(moreData); // WRONG
});
// BAD! Callback function must use only one arangojs call!
await trx.step(() => {
return collection.save(data)
.then(() => collection.save(moreData)); // WRONG
});
// BETTER: Wrap every arangojs method call that should be part of the
// transaction in a separate `trx.step` call
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
+
+// BAD! If the callback is an async function it must not await before
// calling an arangojs method!
await trx.step(async () => {
await doSomethingElse();
return collection.save(data); // WRONG
});
// BAD! Any arangojs inside the callback must not happen inside a promise
// method!
await trx.step(() => {
return doSomethingElse()
.then(() => collection.save(data)); // WRONG
});
// BETTER: Perform any async logic needed outside the `trx.step` call
await doSomethingElse();
await trx.step(() => collection.save(data));
// OKAY: You can perform async logic in the callback after the arangojs
// method call as long as it does not involve additional arangojs method
// calls, but this makes it easy to make mistakes later
await trx.step(async () => {
await collection.save(data);
await doSomethingDifferent(); // no arangojs method calls allowed
});
+
+// BAD! The callback should not use any functions that themselves use any
// arangojs methods!
async function saveSomeData() {
await collection.save(data);
await collection.save(moreData);
}
await trx.step(() => saveSomeData()); // WRONG
// BETTER: Pass the transaction to functions that need to call arangojs
// methods inside a transaction
async function saveSomeData(trx) {
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
}
await saveSomeData(trx); // no additional `trx.step` call needed here
+
+// BAD! You must wait for the promise to resolve (or await on the
// `trx.step` call) before calling `trx.step` again!
trx.step(() => collection.save(data)); // WRONG
await trx.step(() => collection.save(moreData));
// BAD! The trx.step callback can not make multiple calls to async arangojs
// methods, not even using Promise.all!
await trx.step(() => Promise.all([ // WRONG
collection.save(data),
collection.save(moreData),
]));
// BAD! Multiple `trx.step` calls can not run in parallel!
await Promise.all([ // WRONG
trx.step(() => collection.save(data)),
trx.step(() => collection.save(moreData)),
]);
// BETTER: Always call `trx.step` sequentially, one after the other
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
// OKAY: The then callback can be used if async/await is not available
trx.step(() => collection.save(data))
.then(() => trx.step(() => collection.save(moreData)));
+
+// BAD! The callback will return an empty promise that resolves before
// the inner arangojs method call has even talked to ArangoDB!
await trx.step(async () => {
collection.save(data); // WRONG
});
// BETTER: Use an arrow function so you don't forget to return
await trx.step(() => collection.save(data));
// OKAY: Remember to always return when using a function body
await trx.step(() => {
return collection.save(data); // easy to forget!
});
// OKAY: You do not have to use arrow functions but it helps
await trx.step(function () {
return collection.save(data);
});
+
+// BAD! You can not pass promises instead of a callback!
await trx.step(collection.save(data)); // WRONG
// BETTER: Wrap the code in a function and pass the function instead
await trx.step(() => collection.save(data));
+
+// WORSE: Calls to non-async arangojs methods don't need to be performed
// as part of a transaction
const collection = await trx.step(() => db.collection("my-documents"));
// BETTER: If an arangojs method is not async and doesn't return promises,
// call it without `trx.step`
const collection = db.collection("my-documents");
+
+Optional options: TransactionOptions
Options for the transaction.
+const vertices = db.collection("vertices");
const edges = db.collection("edges");
await db.withTransaction(
[
"vertices",
edges, // collection instances can be passed directly
],
async (step) => {
const start = await step(() => vertices.document("a"));
const end = await step(() => vertices.document("b"));
await step(() => edges.save({ _from: start._id, _to: end._id }));
}
);
+
+Begins and commits a transaction using the given callback. Individual +requests that are part of the transaction need to be wrapped in the step +function passed into the callback. If the promise returned by the callback +is rejected, the transaction will be aborted.
+The collection can be specified as a collection name (string) or an object
+implementing the collections.ArangoCollection interface: Collection,
+graphs.GraphVertexCollection, graphs.GraphEdgeCollection, as well as
+(in TypeScript) collections.DocumentCollection and collections.EdgeCollection.
A collection that can be read from and written to +during the transaction.
+Callback function executing the transaction steps.
+Executes the given function locally as a single step of the transaction.
+Callback function returning a promise.
+Warning: The callback function should wrap a single call of an async
+arangojs method (e.g. a method on a Collection object of a collection
+that is involved in the transaction, or the db.query method).
+If the callback function is async, only the first promise-returning (or
+async) method call will be executed as part of the transaction. See the
+examples below for how to avoid common mistakes when using this method.
Note: Avoid defining the callback as an async function if possible +as arangojs will throw an error if the callback did not return a promise. +Async functions will return an empty promise by default, making it harder +to notice if you forgot to return something from the callback.
+Note: Although almost anything can be wrapped in a callback and passed +to this method, that does not guarantee ArangoDB can actually do it in a +transaction. Refer to the ArangoDB documentation if you are unsure whether +a given operation can be executed as part of a transaction. Generally any +modification or retrieval of data is eligible but modifications of +collections or databases are not.
+const db = new Database();
const vertices = db.collection("vertices");
const edges = db.collection("edges");
const trx = await db.beginTransaction({ write: [vertices, edges] });
// The following code will be part of the transaction
const left = await trx.step(() => vertices.save({ label: "left" }));
const right = await trx.step(() => vertices.save({ label: "right" }));
// Results from preceding actions can be used normally
await trx.step(() => edges.save({
_from: left._id,
_to: right._id,
data: "potato"
}));
// Transaction must be committed for changes to take effect
// Always call either trx.commit or trx.abort to end a transaction
await trx.commit();
+
+// BAD! If the callback is an async function it must only use await once!
await trx.step(async () => {
await collection.save(data);
await collection.save(moreData); // WRONG
});
// BAD! Callback function must use only one arangojs call!
await trx.step(() => {
return collection.save(data)
.then(() => collection.save(moreData)); // WRONG
});
// BETTER: Wrap every arangojs method call that should be part of the
// transaction in a separate `trx.step` call
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
+
+// BAD! If the callback is an async function it must not await before
// calling an arangojs method!
await trx.step(async () => {
await doSomethingElse();
return collection.save(data); // WRONG
});
// BAD! Any arangojs inside the callback must not happen inside a promise
// method!
await trx.step(() => {
return doSomethingElse()
.then(() => collection.save(data)); // WRONG
});
// BETTER: Perform any async logic needed outside the `trx.step` call
await doSomethingElse();
await trx.step(() => collection.save(data));
// OKAY: You can perform async logic in the callback after the arangojs
// method call as long as it does not involve additional arangojs method
// calls, but this makes it easy to make mistakes later
await trx.step(async () => {
await collection.save(data);
await doSomethingDifferent(); // no arangojs method calls allowed
});
+
+// BAD! The callback should not use any functions that themselves use any
// arangojs methods!
async function saveSomeData() {
await collection.save(data);
await collection.save(moreData);
}
await trx.step(() => saveSomeData()); // WRONG
// BETTER: Pass the transaction to functions that need to call arangojs
// methods inside a transaction
async function saveSomeData(trx) {
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
}
await saveSomeData(trx); // no additional `trx.step` call needed here
+
+// BAD! You must wait for the promise to resolve (or await on the
// `trx.step` call) before calling `trx.step` again!
trx.step(() => collection.save(data)); // WRONG
await trx.step(() => collection.save(moreData));
// BAD! The trx.step callback can not make multiple calls to async arangojs
// methods, not even using Promise.all!
await trx.step(() => Promise.all([ // WRONG
collection.save(data),
collection.save(moreData),
]));
// BAD! Multiple `trx.step` calls can not run in parallel!
await Promise.all([ // WRONG
trx.step(() => collection.save(data)),
trx.step(() => collection.save(moreData)),
]);
// BETTER: Always call `trx.step` sequentially, one after the other
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
// OKAY: The then callback can be used if async/await is not available
trx.step(() => collection.save(data))
.then(() => trx.step(() => collection.save(moreData)));
+
+// BAD! The callback will return an empty promise that resolves before
// the inner arangojs method call has even talked to ArangoDB!
await trx.step(async () => {
collection.save(data); // WRONG
});
// BETTER: Use an arrow function so you don't forget to return
await trx.step(() => collection.save(data));
// OKAY: Remember to always return when using a function body
await trx.step(() => {
return collection.save(data); // easy to forget!
});
// OKAY: You do not have to use arrow functions but it helps
await trx.step(function () {
return collection.save(data);
});
+
+// BAD! You can not pass promises instead of a callback!
await trx.step(collection.save(data)); // WRONG
// BETTER: Wrap the code in a function and pass the function instead
await trx.step(() => collection.save(data));
+
+// WORSE: Calls to non-async arangojs methods don't need to be performed
// as part of a transaction
const collection = await trx.step(() => db.collection("my-documents"));
// BETTER: If an arangojs method is not async and doesn't return promises,
// call it without `trx.step`
const collection = db.collection("my-documents");
+
+Optional options: TransactionOptions
Options for the transaction.
+const vertices = db.collection("vertices");
const start = await vertices.document("a");
const end = await vertices.document("b");
const edges = db.collection("edges");
await db.withTransaction(
edges, // collection instances can be passed directly
async (step) => {
await step(() => edges.save({ _from: start._id, _to: end._id }));
}
);
+
+Represents an error returned by ArangoDB.
+Creates a new ArangoError from an ArangoDB error response.
+Optional cause?: Error
+Optional isSafeToRetry
+HTTP status code included in the server error response object.
+ArangoDB error code.
+Error message accompanying the error code.
+Fetch request object.
+Server response object.
+Static Optional prepareStackTrace
Optional override for formatting stack traces
+Static stackTraceLimit
+Static captureStackTrace
+
+Represents an error from a failed fetch request.
+The root cause is often extremely difficult to determine.
+Optional cause?: Error
+Optional isSafeToRetry
Indicates whether the request that caused this error can be safely retried.
+Fetch request object.
+Static Optional prepareStackTrace
Optional override for formatting stack traces
+Static stackTraceLimit
+Static captureStackTrace
+
+Represents a plain HTTP error response.
+HTTP status code of the server response.
+Indicates whether the request that caused this error can be safely retried.
+Fetch request object.
+Server response object.
+Static Optional prepareStackTrace
Optional override for formatting stack traces
+Static stackTraceLimit
+Static captureStackTrace
+
+Represents a network error or an error encountered while performing a network request.
+Optional cause?: Error
+Optional isSafeToRetry
Indicates whether the request that caused this error can be safely retried.
+Fetch request object.
+Static Optional prepareStackTrace
Optional override for formatting stack traces
+Static stackTraceLimit
+Static captureStackTrace
+
+Represents an error from a deliberate timeout encountered while waiting
+for propagation.
+Optional message: string
+Optional cause?: Error
+Static Optional prepareStackTrace
Optional override for formatting stack traces
+Static stackTraceLimit
+Static captureStackTrace
+
+Represents an error from a request that was aborted.
+Optional cause?: Error
+Optional isSafeToRetry
Indicates whether the request that caused this error can be safely retried.
+Fetch request object.
+Static Optional prepareStackTrace
Optional override for formatting stack traces
+Static stackTraceLimit
+Static captureStackTrace
+
+Represents an error from a deliberate timeout encountered while waiting
+for a server response.
+Optional cause?: Error
+Optional isSafeToRetry
Indicates whether the request that caused this error can be safely retried.
+Fetch request object.
+Static Optional prepareStackTrace
Optional override for formatting stack traces
+Static stackTraceLimit
+Static captureStackTrace
+
+Represents a graph in a databases.Database.
+Indicates that this object represents an ArangoDB Graph.
+Name of the graph.
+Adds an edge definition to this graph.
+Definition of a relation in this graph.
+const db = new Database();
const graph = db.graph("some-graph");
await graph.addEdgeDefinition({
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
});
// The edge definition has been added to the graph
+
+Adds the given collection to this graph as a vertex collection.
+Collection to add to the graph.
+const db = new Database();
const graph = db.graph("some-graph");
await graph.addVertexCollection("more-vertices");
// The collection "more-vertices" has been added to the graph
const extra = db.collection("extra-vertices");
await graph.addVertexCollection(extra);
// The collection "extra-vertices" has been added to the graph
+
+Creates a graph with the given edgeDefinitions and options for this
+graph's name.
Definitions for the relations of the graph.
+Options for creating the graph.
+const db = new Database();
const graph = db.graph("some-graph");
const info = await graph.create([
{
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
},
]);
// graph now exists
+
+Deletes the graph from the database.
+If set to true, the collections associated with +the graph will also be deleted.
const db = new Database();
const graph = db.graph("some-graph");
await graph.drop();
// the graph "some-graph" no longer exists
+
+Returns a GraphEdgeCollection instance for the given collection +name representing the collection in this graph.
+Name of the edge collection.
+const db = new Database();
const graph = db.graph("some-graph");
const info = await graph.create([
{
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
},
]);
const graphEdgeCollection = graph.edgeCollection("edges");
// Access the underlying EdgeCollection API:
const edgeCollection = graphEdgeCollection.collection;
+
+Fetches all edge collections of this graph from the database and returns +an array of GraphEdgeCollection instances.
+See also Graph#listEdgeCollections.
+const db = new Database();
const graph = db.graph("some-graph");
const info = await graph.create([
{
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
},
]);
const graphEdgeCollections = await graph.edgeCollections();
for (const collection of graphEdgeCollections) {
console.log(collection.name);
// "edges"
}
+
+Retrieves general information about the graph.
+const db = new Database();
const graph = db.graph("some-graph");
const data = await graph.get();
// data contains general information about the graph
+
+Fetches all edge collections of this graph from the database and returns +an array of their names.
+See also Graph#edgeCollections.
+const db = new Database();
const graph = db.graph("some-graph");
const info = await graph.create([
{
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
},
]);
const edgeCollectionNames = await graph.listEdgeCollections();
// ["edges"]
+
+Fetches all vertex collections of this graph from the database and returns +an array of their names.
+See also Graph#vertexCollections.
+const db = new Database();
const graph = db.graph("some-graph");
const info = await graph.create([
{
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
},
]);
const vertexCollectionNames = await graph.listVertexCollections();
// ["start-vertices", "end-vertices"]
+
+Removes the edge definition for the given edge collection from this graph.
+Edge collection for which to remove the definition.
+If set to true, the collection will also be +deleted from the database.
const db = new Database();
const graph = db.graph("some-graph");
const info = await graph.create([
{
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
},
]);
await graph.removeEdgeDefinition("edges");
// The edge definition for "edges" has been removed
+
+Removes the given collection from this graph as a vertex collection.
+Collection to remove from the graph.
+If set to true, the collection will also be +deleted from the database.
const db = new Database();
const graph = db.graph("some-graph");
const info = await graph.create([
{
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
},
]);
await graph.removeVertexCollection("start-vertices");
// The collection "start-vertices" is no longer part of the graph.
+
+Replaces an edge definition in this graph. The existing edge definition +for the given edge collection will be overwritten.
+Definition of a relation in this graph.
+Optional options: ReplaceEdgeDefinitionOptions
const db = new Database();
const graph = db.graph("some-graph");
const info = await graph.create([
{
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
},
]);
await graph.replaceEdgeDefinition({
collection: "edges",
from: ["start-vertices"],
to: ["other-vertices"],
});
// The edge definition for "edges" has been replaced
+
+Replaces an edge definition in this graph. The existing edge definition +for the given edge collection will be overwritten.
+Edge collection for which to replace the definition.
+Definition of a relation in this graph.
+Optional options: ReplaceEdgeDefinitionOptions
const db = new Database();
const graph = db.graph("some-graph");
const info = await graph.create([
{
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
},
]);
await graph.replaceEdgeDefinition("edges", {
collection: "edges",
from: ["start-vertices"],
to: ["other-vertices"],
});
// The edge definition for "edges" has been replaced
+
+Returns a GraphVertexCollection instance for the given collection +name representing the collection in this graph.
+Name of the vertex collection.
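+A minimal sketch mirroring the edgeCollection example above:
+const db = new Database();
const graph = db.graph("some-graph");
const graphVertexCollection = graph.vertexCollection("start-vertices");
// graphVertexCollection is a GraphVertexCollection for "start-vertices"
+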
+Fetches all vertex collections of this graph from the database and returns +an array of GraphVertexCollection instances.
+See also Graph#listVertexCollections.
+const db = new Database();
const graph = db.graph("some-graph");
const info = await graph.create([
{
collection: "edges",
from: ["start-vertices"],
to: ["end-vertices"],
},
]);
const vertexCollections = await graph.vertexCollections();
for (const vertexCollection of vertexCollections) {
console.log(vertexCollection.name);
// "start-vertices"
// "end-vertices"
}
+
+Represents a collections.EdgeCollection of edges in a Graph.
+Type to represent edge document contents returned +by the server (including computed properties).
+Type to represent edge document contents passed +when inserting or replacing edge documents (without computed properties).
+A collections.EdgeCollection instance for this edge collection.
+Name of the collection.
+Retrieves the edge matching the given key or id.
+Throws an exception when passed an edge or _id from a different
+collection, or if the edge does not exist.
Document _key, _id or object with either of those
+properties (e.g. an edge from this collection).
+Optional options: ReadGraphDocumentOptions
Options for retrieving the edge.
+const graph = db.graph("some-graph");
const collection = graph.edgeCollection("friends")
try {
const edge = await collection.edge("abc123");
console.log(edge);
} catch (e: any) {
console.error("Could not find edge");
}
+
+const graph = db.graph("some-graph");
const collection = graph.edgeCollection("friends")
const edge = await collection.edge("abc123", { graceful: true });
if (edge) {
console.log(edge);
} else {
console.error("Edge does not exist");
}
+
+Retrieves the edge matching the given key or id.
+Throws an exception when passed an edge or _id from a different
+collection, or if the edge does not exist.
Document _key, _id or object with either of those
+properties (e.g. an edge from this collection).
If set to true, null is returned instead of an
+exception being thrown if the edge does not exist.
const graph = db.graph("some-graph");
const collection = graph.edgeCollection("friends")
try {
const edge = await collection.edge("abc123", false);
console.log(edge);
} catch (e: any) {
console.error("Could not find edge");
}
+
+const graph = db.graph("some-graph");
const collection = graph.edgeCollection("friends")
const edge = await collection.edge("abc123", true);
if (edge) {
console.log(edge);
} else {
console.error("Edge does not exist");
}
+
+Checks whether an edge matching the given key or id exists in this
+collection.
+Throws an exception when passed an edge or _id from a different
+collection.
Document _key, _id or object with either of those +properties (e.g. an edge from this collection).
const graph = db.graph("some-graph");
const collection = graph.edgeCollection("friends")
const exists = await collection.edgeExists("abc123");
if (!exists) {
console.log("Edge does not exist");
}
+
+Removes an existing edge from the collection.
+Throws an exception when passed an edge or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. an edge from this collection).
Optional
options: RemoveGraphDocumentOptions
Options for removing the edge.
+const db = new Database();
const graph = db.graph("some-graph");
const collection = graph.edgeCollection("friends");
const doc = await collection.edge("musadir");
await collection.remove(doc);
// edge with key "musadir" deleted
+
+Replaces an existing edge in the collection.
+Throws an exception when passed an edge or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. an edge from this collection).
Optional
options: ReplaceGraphDocumentOptions
Options for replacing the edge.
+const db = new Database();
const graph = db.graph("some-graph");
const collection = graph.edgeCollection("friends");
await collection.save(
{
_key: "musadir",
_from: "users/rana",
_to: "users/mudasir",
active: true,
best: true
}
);
const result = await collection.replace(
"musadir",
{ active: false },
{ returnNew: true }
);
console.log(result.new.active, result.new.best); // false undefined
+
+Inserts a new edge with the given data into the collection.
The contents of the new edge.
+Optional
options: InsertGraphDocumentOptions
Options for inserting the edge.
+const db = new Database();
const graph = db.graph("some-graph");
const collection = graph.edgeCollection("friends");
const result = await collection.save(
{ _from: "users/rana", _to: "users/mudasir", active: false },
{ returnNew: true }
);
+
+Updates an existing edge in the collection.
+Throws an exception when passed an edge or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. an edge from this collection).
Optional
options: ReplaceGraphDocumentOptions
Options for updating the edge.
+const db = new Database();
const graph = db.graph("some-graph");
const collection = graph.edgeCollection("friends");
await collection.save(
{
_key: "musadir",
_from: "users/rana",
_to: "users/mudasir",
active: true,
best: true
}
);
const result = await collection.update(
"musadir",
{ active: false },
{ returnNew: true }
);
console.log(result.new.active, result.new.best); // false true
+
+Represents a collections.DocumentCollection of vertices in a Graph.
+Type to represent vertex document contents returned +by the server (including computed properties).
+Type to represent vertex document contents passed +when inserting or replacing vertex documents (without computed properties).
+A collections.DocumentCollection instance for this vertex collection.
+Name of the collection.
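+For example, a minimal sketch of obtaining this wrapper (names are illustrative):
+const db = new Database();
const graph = db.graph("some-graph");
const vertices = graph.vertexCollection("vertices");
console.log(vertices.name); // "vertices"
// vertices.collection is the underlying collections.DocumentCollection
+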
+Removes an existing vertex from the collection.
+Throws an exception when passed a vertex or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a vertex from this collection).
Optional
options: RemoveGraphDocumentOptions
Options for removing the vertex.
+const graph = db.graph("some-graph");
const collection = graph.vertexCollection("vertices");
await collection.remove("abc123");
// document with key "abc123" deleted
+
+const graph = db.graph("some-graph");
const collection = graph.vertexCollection("vertices");
const doc = await collection.vertex("abc123");
await collection.remove(doc);
// document with key "abc123" deleted
+
+Replaces an existing vertex in the collection.
+Throws an exception when passed a vertex or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a vertex from this collection).
Optional
options: ReplaceGraphDocumentOptions
Options for replacing the vertex.
+const graph = db.graph("some-graph");
const collection = graph.vertexCollection("vertices");
await collection.save({ _key: "a", color: "blue", count: 1 });
const result = await collection.replace(
"a",
{ color: "red" },
{ returnNew: true }
);
console.log(result.new.color, result.new.count); // "red" undefined
+
+Inserts a new vertex with the given data into the collection.
The contents of the new vertex.
+Optional
options: InsertGraphDocumentOptions
Options for inserting the vertex.
+const graph = db.graph("some-graph");
const collection = graph.vertexCollection("friends");
const result = await collection.save(
{ _key: "a", color: "blue", count: 1 },
{ returnNew: true }
);
console.log(result.new.color, result.new.count); // "blue" 1
+
+Updates an existing vertex in the collection.
+Throws an exception when passed a vertex or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a vertex from this collection).
Optional
options: ReplaceGraphDocumentOptions
Options for updating the vertex.
+const graph = db.graph("some-graph");
const collection = graph.vertexCollection("vertices");
await collection.save({ _key: "a", color: "blue", count: 1 });
const result = await collection.update(
"a",
{ count: 2 },
{ returnNew: true }
);
console.log(result.new.color, result.new.count); // "blue" 2
+
+Retrieves the vertex matching the given key or id.
+Throws an exception when passed a vertex or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a vertex from this collection).
Optional
options: ReadGraphDocumentOptions
Options for retrieving the vertex.
+const graph = db.graph("some-graph");
const collection = graph.vertexCollection("vertices");
try {
const vertex = await collection.vertex("abc123");
console.log(vertex);
} catch (e: any) {
console.error("Could not find vertex");
}
+
+const graph = db.graph("some-graph");
const collection = graph.vertexCollection("vertices");
const vertex = await collection.vertex("abc123", { graceful: true });
if (vertex) {
console.log(vertex);
} else {
console.error("Could not find vertex");
}
+
+Retrieves the vertex matching the given key or id.
+Throws an exception when passed a vertex or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a vertex from this collection).
If set to true, null is returned instead of an exception being thrown if the vertex does not exist.
const graph = db.graph("some-graph");
const collection = graph.vertexCollection("vertices");
try {
const vertex = await collection.vertex("abc123", false);
console.log(vertex);
} catch (e: any) {
console.error("Could not find vertex");
}
+
+const graph = db.graph("some-graph");
const collection = graph.vertexCollection("vertices");
const vertex = await collection.vertex("abc123", true);
if (vertex) {
console.log(vertex);
} else {
console.error("Could not find vertex");
}
+
+Checks whether a vertex matching the given key or id exists in this +collection.
+Throws an exception when passed a vertex or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a vertex from this collection).
const graph = db.graph("some-graph");
const collection = graph.vertexCollection("vertices");
const exists = await collection.vertexExists("abc123");
if (!exists) {
console.log("Vertex does not exist");
}
+
+Represents an async job in a databases.Database.
+The job's ID.
+Whether the job's results have been loaded. If set to true, the job's result can be accessed from Job.result.
The job's result if it has been loaded or undefined otherwise.
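+A sketch of obtaining a Job instance; this assumes the databases.Database#createJob method, which executes a request as an async job (names are illustrative):
+const db = new Database();
const collection = db.collection("some-collection");
const job = await db.createJob(() => collection.count());
console.log(job.id); // ID assigned to the job by the server
console.log(job.isLoaded); // false until the result has been loaded
+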
Fetches the job's completion state.
+Returns true if the job has completed, false otherwise.
// poll for the job to complete
while (!(await job.getCompleted())) {
await timeout(1000);
}
// job result is now available and can be loaded
await job.load();
console.log(job.result);
+
+Loads the job's result from the database if it is not already loaded.
+// poll for the job to complete
while (!job.isLoaded) {
await timeout(1000);
await job.load();
}
// job result is now loaded and can also be accessed from job.result
console.log(job.result);
+
+Represents an arbitrary route relative to an ArangoDB database.
+Headers of this route.
+Path of this route.
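+A sketch of creating a route with default headers that child routes inherit (the service path and header are illustrative):
+const db = new Database();
const foxx = db.route("/my-foxx-service", { "x-api-key": "hunter2" });
// foxx.path is "/my-foxx-service" and foxx.headers includes "x-api-key"
const users = foxx.route("/users"); // inherits the "x-api-key" header
+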
+Performs a DELETE request against the given path relative to this route +and returns the server response.
+Path relative to this route.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const res = await foxx.delete("/users/admin");
+
+Performs a DELETE request against the given path relative to this route +and returns the server response.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const user = foxx.route("/users/admin");
const res = await user.delete();
+
+Performs a GET request against the given path relative to this route +and returns the server response.
+Path relative to this route.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const res = await foxx.get("/users", { offset: 10, limit: 5 });
+
+Performs a GET request against the given path relative to this route +and returns the server response.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const users = foxx.route("/users");
const res = await users.get({ offset: 10, limit: 5 });
+
+Performs a HEAD request against the given path relative to this route +and returns the server response.
+Path relative to this route.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const res = await foxx.head("/users", { offset: 10, limit: 5 });
+
+Performs a HEAD request against the given path relative to this route +and returns the server response.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const users = foxx.route("/users");
const res = await users.head({ offset: 10, limit: 5 });
+
+Performs a PATCH request against the given path relative to this route +and returns the server response.
+Path relative to this route.
+Optional
body: any
Body of the request object.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const res = await foxx.patch("/users/admin", { password: "admin" });
+
+Performs a PATCH request against the given path relative to this route +and returns the server response.
+Note: body must not be a string.
Optional
body: any
Body of the request object. Must not be a string.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const user = foxx.route("/users/admin")
const res = await user.patch({ password: "admin" });
+
+Performs a POST request against the given path relative to this route +and returns the server response.
+Path relative to this route.
+Optional
body: any
Body of the request object.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const res = await foxx.post("/users", {
username: "admin",
password: "hunter2"
});
+
+Performs a POST request against the given path relative to this route +and returns the server response.
+Note: body must not be a string.
Optional
body: any
Body of the request object. Must not be a string.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const users = foxx.route("/users");
const res = await users.post({
username: "admin",
password: "hunter2"
});
+
+Performs a PUT request against the given path relative to this route +and returns the server response.
+Path relative to this route.
+Optional
body: any
Body of the request object.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const res = await foxx.put("/users/admin/password", { password: "admin" });
+
+Performs a PUT request against the given path relative to this route +and returns the server response.
+Note: body must not be a string.
Optional
body: any
Body of the request object. Must not be a string.
+Optional
search: Record<string, any> | URLSearchParams
Query string parameters for this request.
+Optional
headers: Record<string, string> | Headers
Additional headers to send with this request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const password = foxx.route("/users/admin/password");
const res = await password.put({ password: "admin" });
+
+Performs an arbitrary HTTP request relative to this route and returns the +server response.
+Options for performing the request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const res = await foxx.request({
method: "POST",
pathname: "/users",
body: {
username: "admin",
password: "hunter2"
}
});
+
+Creates a new route relative to this route that inherits any of its default +HTTP headers.
+Path relative to this route.
+Optional
headers: Record<string, string> | Headers
Additional headers that will be sent with each request.
+const db = new Database();
const foxx = db.route("/my-foxx-service");
const users = foxx.route("/users");
+
+Represents a streaming transaction in a databases.Database.
+Unique identifier of this transaction.
+
+Attempts to abort the transaction.
+Options for aborting the transaction.
+const db = new Database();
const col = db.collection("some-collection");
const trx = await db.beginTransaction(col);
await trx.step(() => col.save({ hello: "world" }));
const result = await trx.abort();
// result indicates the updated transaction status
+
+Attempts to commit the transaction to the database.
+Options for committing the transaction.
+const db = new Database();
const col = db.collection("some-collection");
const trx = await db.beginTransaction(col);
await trx.step(() => col.save({ hello: "world" }));
const result = await trx.commit();
// result indicates the updated transaction status
+
+Retrieves general information about the transaction.
+const db = new Database();
const col = db.collection("some-collection");
const trx = await db.beginTransaction(col);
await trx.step(() => col.save({ hello: "world" }));
const info = await trx.get();
// the transaction exists
+
+Executes the given function locally as a single step of the transaction.
+Callback function returning a promise.
+Warning: The callback function should wrap a single call of an async arangojs method (e.g. a method on a Collection object of a collection that is involved in the transaction or the db.query method). If the callback function is async, only the first promise-returning (or async) method call will be executed as part of the transaction. See the examples below for how to avoid common mistakes when using this method.
Note: Avoid defining the callback as an async function if possible +as arangojs will throw an error if the callback did not return a promise. +Async functions will return an empty promise by default, making it harder +to notice if you forgot to return something from the callback.
+Note: Although almost anything can be wrapped in a callback and passed +to this method, that does not guarantee ArangoDB can actually do it in a +transaction. Refer to the ArangoDB documentation if you are unsure whether +a given operation can be executed as part of a transaction. Generally any +modification or retrieval of data is eligible but modifications of +collections or databases are not.
+const db = new Database();
const vertices = db.collection("vertices");
const edges = db.collection("edges");
const trx = await db.beginTransaction({ write: [vertices, edges] });
// The following code will be part of the transaction
const left = await trx.step(() => vertices.save({ label: "left" }));
const right = await trx.step(() => vertices.save({ label: "right" }));
// Results from preceding actions can be used normally
await trx.step(() => edges.save({
_from: left._id,
_to: right._id,
data: "potato"
}));
// Transaction must be committed for changes to take effect
// Always call either trx.commit or trx.abort to end a transaction
await trx.commit();
+
+// BAD! If the callback is an async function it must only use await once!
await trx.step(async () => {
await collection.save(data);
await collection.save(moreData); // WRONG
});
// BAD! Callback function must use only one arangojs call!
await trx.step(() => {
return collection.save(data)
.then(() => collection.save(moreData)); // WRONG
});
// BETTER: Wrap every arangojs method call that should be part of the
// transaction in a separate `trx.step` call
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
+
+// BAD! If the callback is an async function it must not await before
// calling an arangojs method!
await trx.step(async () => {
await doSomethingElse();
return collection.save(data); // WRONG
});
// BAD! Any arangojs inside the callback must not happen inside a promise
// method!
await trx.step(() => {
return doSomethingElse()
.then(() => collection.save(data)); // WRONG
});
// BETTER: Perform any async logic needed outside the `trx.step` call
await doSomethingElse();
await trx.step(() => collection.save(data));
// OKAY: You can perform async logic in the callback after the arangojs
// method call as long as it does not involve additional arangojs method
// calls, but this makes it easy to make mistakes later
await trx.step(async () => {
await collection.save(data);
await doSomethingDifferent(); // no arangojs method calls allowed
});
+
+// BAD! The callback should not use any functions that themselves use any
// arangojs methods!
async function saveSomeData() {
await collection.save(data);
await collection.save(moreData);
}
await trx.step(() => saveSomeData()); // WRONG
// BETTER: Pass the transaction to functions that need to call arangojs
// methods inside a transaction
async function saveSomeData(trx) {
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
}
await saveSomeData(trx); // no `trx.step` call needed here
+
+// BAD! You must wait for the promise to resolve (or await on the
// `trx.step` call) before calling `trx.step` again!
trx.step(() => collection.save(data)); // WRONG
await trx.step(() => collection.save(moreData));
// BAD! The trx.step callback can not make multiple calls to async arangojs
// methods, not even using Promise.all!
await trx.step(() => Promise.all([ // WRONG
collection.save(data),
collection.save(moreData),
]));
// BAD! Multiple `trx.step` calls can not run in parallel!
await Promise.all([ // WRONG
trx.step(() => collection.save(data)),
trx.step(() => collection.save(moreData)),
]);
// BETTER: Always call `trx.step` sequentially, one after the other
await trx.step(() => collection.save(data));
await trx.step(() => collection.save(moreData));
// OKAY: The then callback can be used if async/await is not available
trx.step(() => collection.save(data))
.then(() => trx.step(() => collection.save(moreData)));
+
+// BAD! The callback will return an empty promise that resolves before
// the inner arangojs method call has even talked to ArangoDB!
await trx.step(async () => {
collection.save(data); // WRONG
});
// BETTER: Use an arrow function so you don't forget to return
await trx.step(() => collection.save(data));
// OKAY: Remember to always return when using a function body
await trx.step(() => {
return collection.save(data); // easy to forget!
});
// OKAY: You do not have to use arrow functions but it helps
await trx.step(function () {
return collection.save(data);
});
+
+// BAD! You can not pass promises instead of a callback!
await trx.step(collection.save(data)); // WRONG
// BETTER: Wrap the code in a function and pass the function instead
await trx.step(() => collection.save(data));
+
+// WORSE: Calls to non-async arangojs methods don't need to be performed
// as part of a transaction
const collection = await trx.step(() => db.collection("my-documents"));
// BETTER: If an arangojs method is not async and doesn't return promises,
// call it without `trx.step`
const collection = db.collection("my-documents");
+
+Represents a View in a databases.Database.
+Name of the View.
+Creates a View with the given options and the instance's name.
See also databases.Database#createView.
+const db = new Database();
const view = db.view("potatoes");
await view.create();
// the ArangoSearch View "potatoes" now exists
+
+Retrieves general information about the View.
+const db = new Database();
const view = db.view("some-view");
const data = await view.get();
// data contains general information about the View
+
+Retrieves the View's properties.
+const db = new Database();
const view = db.view("some-view");
const data = await view.properties();
// data contains the View's properties
+
+Renames the View and updates the instance's name to newName.
Additionally removes the instance from the databases.Database's +internal cache.
+Note: Renaming Views may not be supported when ArangoDB is +running in a cluster configuration.
+The new name of the View.
+const db = new Database();
const view1 = db.view("some-view");
await view1.rename("other-view");
const view2 = db.view("some-view");
const view3 = db.view("other-view");
// Note all three View instances are different objects but
// view1 and view3 represent the same ArangoDB view!
+
+Replaces the properties of the View.
+Optional
properties: Properties
New properties of the View.
+const db = new Database();
const view = db.view("some-view");
const result = await view.replaceProperties({
consolidationIntervalMsec: 234
});
console.log(result.consolidationIntervalMsec); // 234
+
+Updates the properties of the View.
+Optional
properties: Properties
Properties of the View to update.
+const db = new Database();
const view = db.view("some-view");
const result = await view.updateProperties({
consolidationIntervalMsec: 234
});
console.log(result.consolidationIntervalMsec); // 234
+
+Template string handler (template tag) for AQL queries.
+The aql tag can be used to write complex AQL queries as multi-line strings without having to worry about bindVars and the distinction between collections and regular parameters.
Tagged template strings will return an AqlQuery object with query and bindVars attributes reflecting any interpolated values.
Any collections.ArangoCollection instance used in a query string will +be recognized as a collection reference and generate an AQL collection bind +parameter instead of a regular AQL value bind parameter.
+Note: you should always use the aql template tag when writing dynamic AQL queries instead of using untagged (normal) template strings. Untagged template strings will inline any interpolated values and return a plain string as result. The aql template tag will only inline references to the interpolated values and produce an AQL query object containing both the query and the values. This prevents most injection attacks when using untrusted values in dynamic queries.
Rest
...args: AqlValue[]
// Some user-supplied string that may be malicious
const untrustedValue = req.body.email;
// Without aql tag: BAD! DO NOT DO THIS!
const badQuery = `
FOR user IN users
FILTER user.email == "${untrustedValue}"
RETURN user
`;
// e.g. if untrustedValue is '" || user.admin == true || "':
// Query:
// FOR user IN users
// FILTER user.email == "" || user.admin == true || ""
// RETURN user
// With the aql tag: GOOD! MUCH SAFER!
const betterQuery = aql`
FOR user IN users
FILTER user.email == ${untrustedValue}
RETURN user
`;
// Query:
// FOR user IN users
// FILTER user.email == @value0
// RETURN user
// Bind parameters:
// value0 -> untrustedValue
+
+const collection = db.collection("some-collection");
const minValue = 23;
const result = await db.query(aql`
FOR d IN ${collection}
FILTER d.num > ${minValue}
RETURN d
`);
// Equivalent raw query object
const result2 = await db.query({
query: `
FOR d IN @@collection
FILTER d.num > @minValue
RETURN d
`,
bindVars: {
"@collection": collection.name,
minValue: minValue
}
});
+
+const collection = db.collection("some-collection");
const color = "green";
const filter = aql`FILTER d.color == ${color}`;
const result = await db.query(aql`
FOR d IN ${collection}
${filter}
RETURN d
`);
+
+Indicates whether the given value is an AqlLiteral.
+A value that might be an AqlLiteral.
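+A minimal sketch using the aql and literal helpers documented in this module:
+console.log(isAqlLiteral(literal("ASC"))); // true
console.log(isAqlLiteral(aql`RETURN 1`)); // false (an AqlQuery, not a literal)
console.log(isAqlLiteral("ASC")); // false
+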
Constructs AqlQuery objects from an array of arbitrary values.
+Note: Nesting aql template strings is a much safer alternative for most use cases. This low-level helper function only exists to complement the aql tag when constructing complex queries from dynamic arrays of query fragments.
Array of values to join. These values will behave exactly like values interpolated in an aql template string.
Separator to insert between values. This value will behave exactly like a value passed to literal, i.e. it will be inlined as-is, rather than being converted into a bind parameter.
+const users = db.collection("users");
const filters = [];
if (adminsOnly) filters.push(aql`FILTER user.admin`);
if (activeOnly) filters.push(aql`FILTER user.active`);
const result = await db.query(aql`
FOR user IN ${users}
${join(filters)}
RETURN user
`);
+
+const users = db.collection("users");
const keys = ["jreyes", "ghermann"];
// BAD! NEEDLESSLY COMPLEX!
const docs = keys.map(key => aql`DOCUMENT(${users}, ${key})`);
const result = await db.query(aql`
FOR user IN [
${join(docs, ", ")}
]
RETURN user
`);
// Query:
// FOR user IN [
// DOCUMENT(@@value0, @value1), DOCUMENT(@@value0, @value2)
// ]
// RETURN user
// Bind parameters:
// @value0 -> "users"
// value1 -> "jreyes"
// value2 -> "ghermann"
// GOOD! MUCH SIMPLER!
const result = await db.query(aql`
FOR key IN ${keys}
LET user = DOCUMENT(${users}, key)
RETURN user
`);
// Query:
// FOR key IN @value0
// LET user = DOCUMENT(@@value1, key)
// RETURN user
// Bind parameters:
// value0 -> ["jreyes", "ghermann"]
// @value1 -> "users"
+
+Marks an arbitrary scalar value (i.e. a string, number or boolean) as safe for being inlined directly into AQL queries when used in an aql template string, rather than being converted into a bind parameter.
Note: Nesting aql template strings is a much safer alternative for most use cases. This low-level helper function only exists to help with rare edge cases where a trusted AQL query fragment must be read from a string (e.g. when reading query fragments from JSON) and should only be used as a last resort.
// BAD! DO NOT DO THIS!
const sortDirection = literal('ASC');
// GOOD! DO THIS INSTEAD!
const sortDirection = aql`ASC`;
+
+// BAD! DO NOT DO THIS!
const filterColor = literal('FILTER d.color == "green"');
const result = await db.query(aql`
FOR d IN some-collection
${filterColor}
RETURN d
`);
// GOOD! DO THIS INSTEAD!
const color = "green";
const filterColor = aql`FILTER d.color == ${color}`;
const result = await db.query(aql`
FOR d IN some-collection
${filterColor}
RETURN d
`);
+
+// WARNING: We explicitly trust the environment variable to be safe!
const filter = literal(process.env.FILTER_STATEMENT);
const users = await db.query(aql`
FOR user IN users
${filter}
RETURN user
`);
+
+Coerces the given collection name or ArangoCollection object to +a string representing the collection name.
+Collection name or ArangoCollection object.
+Indicates whether the given value represents an ArangoCollection.
+A value that might be a collection.
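+A quick sketch of these two helpers (the collection name is illustrative):
+const db = new Database();
const users = db.collection("users");
console.log(isArangoCollection(users)); // true
console.log(isArangoCollection("users")); // false
console.log(collectionToString(users)); // "users"
console.log(collectionToString("users")); // "users"
+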
+Determines if the given credentials are for Bearer token authentication.
+Returns the status message for the given response's status code or the +status text of the response.
+Indicates whether the given value represents an ArangoDB error response.
+Indicates whether the given status code represents a redirect.
+Indicates whether the given value represents an ArangoError.
+A value that might be an ArangoError.
Indicates whether the given value represents a NetworkError.
+A value that might be a NetworkError.
Creates a new Database instance with its own connection pool.
This is a wrapper function for the databases.Database constructor.
+Optional
config: ConfigOptions
An object with configuration options.
+const db = arangojs({
url: "http://127.0.0.1:8529",
databaseName: "myDatabase",
auth: { username: "admin", password: "hunter2" },
});
+
+Creates a new Database instance with its own connection pool.
This is a wrapper function for the databases.Database constructor.
+Base URL of the ArangoDB server or list of server URLs.
+Equivalent to the url option in configuration.ConfigOptions.
Optional
name: string
const db = arangojs("http://127.0.0.1:8529", "myDatabase");
db.useBasicAuth("admin", "hunter2");
+
+Indicates whether the given value represents a Transaction.
+A value that might be a transaction.
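+A minimal sketch (the collection name is illustrative):
+const db = new Database();
const trx = await db.beginTransaction(db.collection("some-collection"));
console.log(isArangoTransaction(trx)); // true
console.log(isArangoTransaction(db)); // false
+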
+The official ArangoDB JavaScript client for Node.js and the browser.
+npm install --save arangojs
# - or -
yarn add arangojs
+
+When using modern JavaScript tooling with a bundler and compiler (e.g. Babel), arangojs can be installed using npm or yarn like any other dependency.
You can also use jsDelivr CDN during development:
+<script type="importmap">
{
"imports": {
"arangojs": "https://cdn.jsdelivr.net/npm/arangojs@9.0.0/esm/index.js?+esm"
}
}
</script>
<script type="module">
import { Database } from "arangojs";
const db = new Database();
// ...
</script>
+
+Modern JavaScript/TypeScript with async/await and ES Modules:
+import { Database, aql } from "arangojs";
const db = new Database();
const Pokemons = db.collection("my-pokemons");
async function main() {
try {
const pokemons = await db.query(aql`
FOR pokemon IN ${Pokemons}
FILTER pokemon.type == "fire"
RETURN pokemon
`);
console.log("My pokemans, let me show you them:");
for await (const pokemon of pokemons) {
console.log(pokemon.name);
}
} catch (err) {
console.error(err.message);
}
}
main();
+
+Using a different database:
+const db = new Database({
url: "http://127.0.0.1:8529",
databaseName: "pancakes",
auth: { username: "root", password: "hunter2" },
});
// The credentials can be swapped at any time
db.useBasicAuth("admin", "maplesyrup");
+
+Old-school JavaScript with promises and CommonJS:
+var arangojs = require("arangojs");
var Database = arangojs.Database;
var db = new Database();
var pokemons = db.collection("pokemons");
db.query({
query: "FOR p IN @@c FILTER p.type == 'fire' RETURN p",
bindVars: { "@c": "pokemons" },
})
.then(function (cursor) {
console.log("My pokemons, let me show you them:");
return cursor.forEach(function (pokemon) {
console.log(pokemon.name);
});
})
.catch(function (err) {
console.error(err.message);
});
+
+Note: The examples throughout this documentation use async/await and other modern language features like multi-line strings and template tags. When developing for an environment without support for these language features, substitute promises for await syntax as in the above example.
The arangojs driver is compatible with the latest stable version of ArangoDB +available at the time of the driver release and remains compatible with the +two most recent Node.js LTS versions in accordance with the official +Node.js long-term support schedule. Versions +of ArangoDB that have reached their end of life +by the time of a driver release are explicitly not supported.
+For a list of changes between recent versions of the driver, see the +CHANGELOG.
+Note: arangojs is only intended to be used in Node.js or a browser to access ArangoDB from outside the database. If you are looking for the ArangoDB JavaScript API for Foxx or for accessing ArangoDB from within the arangosh interactive shell, please refer to the documentation of the @arangodb module and the db object instead.
If the server returns an ArangoDB error response, arangojs will throw an ArangoError with an errorNum property indicating the ArangoDB error code and expose the response body as the response property of the error object.
For all other errors during the request/response cycle, arangojs will throw a NetworkError or a more specific subclass thereof and expose the originating request object as the request property of the error object.
If the server responded with a non-2xx status code, this NetworkError will be an HttpError with a code property indicating the HTTP status code of the response and a response property containing the response object itself.
If the error is caused by an exception, the originating exception will be available as the cause property of the error object thrown by arangojs. For network errors, this will often be a TypeError.
In Node.js, network errors caused by a TypeError will often have a cause property containing a more detailed exception.
Specifically, these are often either system errors (represented by regular Error objects with additional properties) or errors from the undici module Node.js uses internally for its native fetch implementation.
Node.js system error objects provide a code property containing the specific string error code, a syscall property identifying the underlying system call that triggered the error (e.g. connect), as well as other helpful properties.
For more details on Node.js system errors, see the Node.js documentation of the SystemError interface as well as the section on Node.js error codes.
For more details on the errors thrown by undici, see the undici errors documentation.
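+To illustrate, a minimal sketch of telling these error types apart in application code; this assumes the isArangoError and isNetworkError helpers described above are imported from arangojs and that a collection named "users" exists:
+try {
  await db.collection("users").document("does-not-exist");
} catch (err: any) {
  if (isArangoError(err)) {
    // ArangoDB error response, e.g. errorNum 1202 (document not found)
    console.error("ArangoDB error", err.errorNum, err.message);
  } else if (isNetworkError(err)) {
    // request/response cycle failed; for an HttpError, err.code holds the
    // HTTP status code, and err.cause may hold the originating exception
    console.error("Network error", err.message, err.cause);
  } else {
    throw err;
  }
}
+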
Please make sure you are using the latest version of this driver and that the +version of the arangojs documentation you are reading matches that version.
+Changes in the major version number of arangojs (e.g. 8.x.y -> 9.0.0) indicate +backwards-incompatible changes in the arangojs API that may require changes in +your code when upgrading your version of arangojs.
+Additionally please ensure that your version of Node.js (or browser) and +ArangoDB are supported by the version of arangojs you are trying to use. See +the compatibility section for additional information.
+You can install an older version of arangojs using npm or yarn:
# for version 8.x.x
yarn add arangojs@8
# - or -
npm install --save arangojs@8
+
+If you are using require to import the arangojs module in JavaScript, the default export might not be recognized as a function by the code intelligence of common editors like Visual Studio Code, breaking auto-complete and other useful features.
As a workaround, use the arangojs function exported by that module instead of calling the module itself:
const arangojs = require("arangojs");
- const db = arangojs({
+ const db = arangojs.arangojs({
url: ARANGODB_SERVER,
});
+
+Alternatively you can use the Database class directly:
const arangojs = require("arangojs");
+ const Database = arangojs.Database;
- const db = arangojs({
+ const db = new Database({
url: ARANGODB_SERVER,
});
+
+Or using object destructuring:
+- const arangojs = require("arangojs");
+ const { Database } = require("arangojs");
- const db = arangojs({
+ const db = new Database({
url: ARANGODB_SERVER,
});
+
+Due to the async, queue-based behavior of arangojs, the stack traces generated when an error occurs rarely provide enough information to determine the location in your own code where the request was initiated.
+Using the precaptureStackTraces configuration option, arangojs will attempt to always generate stack traces proactively when a request is performed, allowing arangojs to provide more meaningful stack traces at the cost of an impact to performance even when no error occurs.
import { Database } from "arangojs";
const db = new Database({
url: ARANGODB_SERVER,
+ precaptureStackTraces: true,
});
+
+Note that arangojs will attempt to use Error.captureStackTrace if available and fall back to generating a stack trace by throwing an error. In environments that do not support the stack property on error objects, this option will still impact performance but not result in any additional information becoming available.
If you want to use Unix domain sockets, you need to install the undici module, which is an optional dependency of arangojs.
npm install --save undici
+
+If the undici module is not installed and arangojs attempts to make a request over a Unix domain socket, the request will fail with a plain Error with a message indicating that the undici module is unavailable.
If you need to support self-signed HTTPS certificates in Node.js, you will need to install the undici module, which is an optional dependency of arangojs.
npm install --save undici
+
+You can instruct arangojs to use the undici module by setting the config.agentOptions option:
import { Database } from "arangojs";
const db = new Database({
url: ARANGODB_SERVER,
+ agentOptions: {
+ ca: [
+ fs.readFileSync(".ssl/sub.class1.server.ca.pem"),
+ fs.readFileSync(".ssl/ca.pem"),
+ ],
+ },
});
+
+To override the global fetch agent instead, you can use the undici module's setGlobalDispatcher method as follows:
import { Agent, setGlobalDispatcher } from "undici";
setGlobalDispatcher(
new Agent({
ca: [
fs.readFileSync(".ssl/sub.class1.server.ca.pem"),
fs.readFileSync(".ssl/ca.pem"),
],
})
);
+
+Although this is strongly discouraged, it's also possible to disable +HTTPS certificate validation entirely this way, but note this has +extremely dangerous security implications:
+ import { Database } from "arangojs";
const db = new Database({
url: ARANGODB_SERVER,
+ agentOptions: {
+ rejectUnauthorized: false,
+ },
});
+
+The requirement to use the undici module to override these settings is a known limitation of Node.js at the time of this writing.
When using arangojs in the browser, self-signed HTTPS certificates need to +be trusted by the browser or use a trusted root certificate.
+When using the transaction.step method it is important to be aware of the limitations of what a callback passed to this method is allowed to do.
const collection = db.collection(collectionName);
const trx = db.transaction(transactionId);
// WARNING: This code will not work as intended!
await trx.step(async () => {
await collection.save(doc1);
await collection.save(doc2); // Not part of the transaction!
});
// INSTEAD: Always perform a single operation per step:
await trx.step(() => collection.save(doc1));
await trx.step(() => collection.save(doc2));
+
+Please refer to the documentation of the transaction.step method for additional examples.
Example messages: transaction not found, transaction already expired.
Transactions have +different guarantees +in a cluster.
+When using arangojs in a cluster with load balancing, you may need to adjust the value of config.poolSize to accommodate the number of transactions you need to be able to run in parallel. The default value is likely to be too low for most cluster scenarios involving frequent streaming transactions.
Note: When using a high value for config.poolSize you may have to adjust the maximum number of threads in the ArangoDB configuration using the server.maximal-threads option to support larger numbers of concurrent transactions on the server side.
The Apache License, Version 2.0. For more information, see the accompanying +LICENSE file.
+Includes code from x3-linkedlist +used under the MIT license.
+An object providing methods for accessing queue time metrics of the most +recently received server responses if the server supports this feature.
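+A sketch of reading these metrics; this assumes they are exposed as db.queueTime with getLatest and getAvg methods (the property and method names are an assumption):
+const db = new Database();
await db.version();
const queueTime = db.queueTime;
console.log(queueTime.getLatest()); // most recently reported queue time
console.log(queueTime.getAvg()); // average of the recorded queue times
+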
+An object representing a trusted AQL literal that will be inlined directly +when used in an AQL template or passed to AQL helper functions.
+Arbitrary values can be converted to trusted AQL literals by passing them +to the literal helper function.
+Generic AQL query object consisting of an AQL query string and its bind +parameters.
+Optional
[type]
An object mapping AQL bind parameter names to their respective values.
+Names of parameters representing collections are prefixed with an +at-symbol.
+An AQL query string.
+A marker interface identifying objects that can be used in AQL template +strings to create references to ArangoDB collections.
+See aql.aql.
+Readonly
nameName of the collection.
+Represents a document collection in a databases.Database.
+See EdgeCollection for a variant of this interface more suited for +edge collections.
+When using TypeScript, collections can be cast to a specific document data +type to increase type safety.
+interface Person {
name: string;
}
const db = new Database();
const documents = db.collection("persons") as DocumentCollection<Person>;
+
+Type to represent document contents returned by the +server (including computed properties).
+Type to represent document contents passed when +inserting or replacing documents (without computed properties).
+Readonly
databaseDatabase this collection belongs to.
+Readonly
nameName of the collection.
+Retrieves the collection checksum.
+Optional
options: CollectionChecksumOptions
Options for retrieving the checksum.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.checksum();
// data contains the collection's checksum
+
+Triggers compaction for a collection.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.compact();
// Background compaction is triggered on the collection
+
+Retrieves information about the number of documents in a collection.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.count();
// data contains the collection's count
+
+Creates a collection with the given options and the instance's name.
See also databases.Database#createCollection and +databases.Database#createEdgeCollection.
+Note: When called on an EdgeCollection instance in TypeScript, the type option must still be set to the correct CollectionType. Otherwise this will result in the collection being created with the default type (i.e. as a document collection).
Optional
options: CollectionPropertiesOptions & { Options for creating the collection.
+const db = new Database();
const collection = db.collection("potatoes");
await collection.create();
// the document collection "potatoes" now exists
+
+const db = new Database();
const collection = db.collection("friends");
await collection.create({ type: CollectionType.EDGE_COLLECTION });
// the edge collection "friends" now exists
+
+interface Friend {
startDate: number;
endDate?: number;
}
const db = new Database();
const collection = db.collection("friends") as EdgeCollection<Friend>;
// even in TypeScript you still need to indicate the collection type
// if you want to create an edge collection
await collection.create({ type: CollectionType.EDGE_COLLECTION });
// the edge collection "friends" now exists
+
+Retrieves the document matching the given key or id.
+Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
Optional
options: ReadDocumentOptions
Options for retrieving the document.
+const db = new Database();
const collection = db.collection("some-collection");
try {
const document = await collection.document("abc123");
console.log(document);
} catch (e: any) {
console.error("Could not find document");
}
+
+const db = new Database();
const collection = db.collection("some-collection");
const document = await collection.document("abc123", { graceful: true });
if (document) {
console.log(document);
} else {
console.error("Could not find document");
}
+
+Retrieves the document matching the given key or id.
+Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
If set to true, null is returned instead of an exception being thrown if the document does not exist.
const db = new Database();
const collection = db.collection("some-collection");
try {
const document = await collection.document("abc123", false);
console.log(document);
} catch (e: any) {
console.error("Could not find document");
}
+
+const db = new Database();
const collection = db.collection("some-collection");
const document = await collection.document("abc123", true);
if (document) {
console.log(document);
} else {
console.error("Could not find document");
}
+
+Checks whether a document matching the given key or id exists in this +collection.
+Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
Optional
options: DocumentExistsOptions
const db = new Database();
const collection = db.collection("some-collection");
const exists = await collection.documentExists("abc123");
if (!exists) {
console.log("Document does not exist");
}
+
+Derives a document _id from the given selector for this collection.
Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
const db = new Database();
const collection = db.collection("some-collection");
const meta = await collection.save({ foo: "bar" }, { returnNew: true });
const doc = meta.new;
console.log(collection.documentId(meta)); // via meta._id
console.log(collection.documentId(doc)); // via doc._id
console.log(collection.documentId(meta._key)); // also works
+
+const db = new Database();
const collection1 = db.collection("some-collection");
const collection2 = db.collection("other-collection");
const meta = await collection1.save({ foo: "bar" });
// Mixing collections is usually a mistake
console.log(collection1.documentId(meta)); // ok: same collection
console.log(collection2.documentId(meta)); // throws: wrong collection
console.log(collection2.documentId(meta._id)); // also throws
console.log(collection2.documentId(meta._key)); // ok but wrong collection
+
+Retrieves the documents matching the given key or id values.
+Throws an exception when passed a document or _id from a different collection, or if the document does not exist.
Array of document _key, _id or objects with either of those properties (e.g. a document from this collection).
Optional
options: BulkReadDocumentsOptions
Options for retrieving the documents.
+const db = new Database();
const collection = db.collection("some-collection");
try {
const documents = await collection.documents(["abc123", "xyz456"]);
console.log(documents);
} catch (e: any) {
console.error("Could not find document");
}
+
+Deletes the collection from the database.
+Optional
options: DropCollectionOptions
Options for dropping the collection.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.drop();
// The collection "some-collection" is now an ex-collection
+
+Deletes the index with the given name or id from the database.
Index name, id or object with either property.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.dropIndex("some-index");
// The index "some-index" no longer exists
+
+Creates a persistent index on the collection if it does not already exist.
+Options for creating the persistent index.
+const db = new Database();
const collection = db.collection("some-collection");
// Create a unique index for looking up documents by username
await collection.ensureIndex({
type: "persistent",
fields: ["username"],
name: "unique-usernames",
unique: true
});
+
+Creates a TTL index on the collection if it does not already exist.
+Options for creating the TTL index.
+const db = new Database();
const collection = db.collection("some-collection");
// Expire documents with "createdAt" timestamp one day after creation
await collection.ensureIndex({
type: "ttl",
fields: ["createdAt"],
expireAfter: 60 * 60 * 24 // 24 hours
});
+
+const db = new Database();
const collection = db.collection("some-collection");
// Expire documents with "expiresAt" timestamp according to their value
await collection.ensureIndex({
type: "ttl",
fields: ["expiresAt"],
expireAfter: 0 // when attribute value is exceeded
});
+
+Creates a multi-dimensional index on the collection if it does not already exist.
+Options for creating the multi-dimensional index.
+const db = new Database();
const collection = db.collection("some-points");
// Create a multi-dimensional index for the attributes x, y and z
await collection.ensureIndex({
type: "mdi",
fields: ["x", "y", "z"],
fieldValueTypes: "double"
});
+
+Creates a geo index on the collection if it does not already exist.
+Options for creating the geo index.
+const db = new Database();
const collection = db.collection("some-collection");
// Create an index for GeoJSON data
await collection.ensureIndex({
type: "geo",
fields: ["lngLat"],
geoJson: true
});
+
+Creates an inverted index on the collection if it does not already exist.
+Options for creating the inverted index.
+const db = new Database();
const collection = db.collection("some-collection");
// Create an inverted index
await collection.ensureIndex({
type: "inverted",
fields: ["a", { name: "b", analyzer: "text_en" }]
});
+
+Creates an index on the collection if it does not already exist.
+Options for creating the index.
+const db = new Database();
const collection = db.collection("some-collection");
// Create a unique index for looking up documents by username
await collection.ensureIndex({
type: "persistent",
fields: ["username"],
name: "unique-usernames",
unique: true
});
+
+Retrieves statistics for a collection.
+Optional
details: boolean
Whether to return extended storage engine-specific details in the figures, which may cause additional load and impact performance.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.figures();
// data contains the collection's figures
+
+Retrieves general information about the collection.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.get();
// data contains general information about the collection
+
+Retrieves the shardId of the shard responsible for the given document.
Document in the collection to look up the shardId of.
const db = new Database();
const collection = db.collection("some-collection");
const responsibleShard = await collection.getResponsibleShard({ _key: "abc123" });
+
+Bulk imports the given data into the collection.
The data to import, as an array of document data.
+Optional
options: ImportDocumentsOptions
Options for importing the data.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.import(
[
{ _key: "jcd", password: "bionicman" },
{ _key: "jreyes", password: "amigo" },
{ _key: "ghermann", password: "zeitgeist" }
]
);
+
+Bulk imports the given data into the collection.
The data to import, as an array containing a single array of attribute names followed by one or more arrays of attribute values for each document.
+Optional
options: ImportDocumentsOptions
Options for importing the data.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.import(
[
[ "_key", "password" ],
[ "jcd", "bionicman" ],
[ "jreyes", "amigo" ],
[ "ghermann", "zeitgeist" ]
]
);
+
+Bulk imports the given data into the collection.
+If type is omitted, data must contain one JSON array per line with the first array providing the attribute names and all other arrays providing attribute values for each document.
If type is set to "documents", data must contain one JSON document per line.
If type is set to "list", data must contain a JSON array of documents.
If type is set to "auto", data can be in either of the formats supported by "documents" or "list".
The data to import as a Buffer (Node), Blob (browser) or string.
+Optional
options: ImportDocumentsOptions & { Options for importing the data.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.import(
'{"_key":"jcd","password":"bionicman"}\r\n' +
'{"_key":"jreyes","password":"amigo"}\r\n' +
'{"_key":"ghermann","password":"zeitgeist"}\r\n',
{ type: "documents" } // or "auto"
);
+
+const db = new Database();
const collection = db.collection("some-collection");
await collection.import(
'[{"_key":"jcd","password":"bionicman"},' +
'{"_key":"jreyes","password":"amigo"},' +
'{"_key":"ghermann","password":"zeitgeist"}]',
{ type: "list" } // or "auto"
);
+
+const db = new Database();
const collection = db.collection("some-collection");
await collection.import(
'["_key","password"]\r\n' +
'["jcd","bionicman"]\r\n' +
'["jreyes","amigo"]\r\n' +
'["ghermann","zeitgeist"]\r\n'
);
+
+Returns an index description by name or id if it exists.
Index name, id or object with either property.
+const db = new Database();
const collection = db.collection("some-collection");
const index = await collection.index("some-index");
+
+Returns a list of all index descriptions for the collection.
+Optional
options: ListIndexesOptions
Options for fetching the index list.
+const db = new Database();
const collection = db.collection("some-collection");
const indexes = await collection.indexes();
+
+const db = new Database();
const collection = db.collection("some-collection");
const allIndexes = await collection.indexes<HiddenIndexDescription>({
withHidden: true
});
+
+Instructs ArangoDB to load as many indexes of the collection into memory +as permitted by the memory limit.
+const db = new Database();
const collection = db.collection("indexed-collection");
await collection.loadIndexes();
// the indexes are now loaded into memory
+
+Retrieves the collection's properties.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.properties();
// data contains the collection's properties
+
+Replaces the properties of the collection.
+const db = new Database();
const collection = db.collection("some-collection");
const result = await collection.setProperties({ waitForSync: true });
// the collection will now wait for data being written to disk
// whenever a document is changed
+
+Instructs ArangoDB to recalculate the collection's document count to fix +any inconsistencies.
+const db = new Database();
const collection = db.collection("inconsistent-collection");
const badData = await collection.count();
// oh no, the collection count looks wrong -- fix it!
await collection.recalculateCount();
const goodData = await collection.count();
// goodData contains the collection's improved count
+
+Removes an existing document from the collection.
+Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
Optional
options: RemoveDocumentOptions
Options for removing the document.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.remove("abc123");
// document with key "abc123" deleted
+
+const db = new Database();
const collection = db.collection("some-collection");
const doc = await collection.document("abc123");
await collection.remove(doc);
// document with key "abc123" deleted
+
+Removes existing documents from the collection.
+Throws an exception when passed any document or _id from a different collection.
Documents _key, _id or objects with either of those properties (e.g. documents from this collection).
Optional
options: Omit<RemoveDocumentOptions, "ifMatch">
Options for removing the documents.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.removeAll(["abc123", "def456"]);
// documents with keys "abc123" and "def456" deleted
+
+Renames the collection and updates the instance's name to newName.
Additionally removes the instance from the databases.Database's internal cache.
+Note: Renaming collections may not be supported when ArangoDB is running in a cluster configuration.
+The new name of the collection.
+const db = new Database();
const collection1 = db.collection("some-collection");
await collection1.rename("other-collection");
const collection2 = db.collection("some-collection");
const collection3 = db.collection("other-collection");
// Note all three collection instances are different objects but
// collection1 and collection3 represent the same ArangoDB collection!
+
+Replaces an existing document in the collection.
+Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
The contents of the new document.
+Optional options: ReplaceDocumentOptions
Options for replacing the document.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.save({ _key: "a", color: "blue", count: 1 });
const result = await collection.replace(
"a",
{ color: "red" },
{ returnNew: true }
);
console.log(result.new.color, result.new.count); // "red" undefined
+
+Replaces existing documents in the collection, identified by the _key or _id of each document.
The documents to replace.
+Optional options: Omit<ReplaceDocumentOptions, "ifMatch">
Options for replacing the documents.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.save({ _key: "a", color: "blue", count: 1 });
await collection.save({ _key: "b", color: "green", count: 3 });
const result = await collection.replaceAll(
[
{ _key: "a", color: "red" },
{ _key: "b", color: "yellow", count: 2 }
],
{ returnNew: true }
);
console.log(result[0].new.color, result[0].new.count); // "red" undefined
console.log(result[1].new.color, result[1].new.count); // "yellow" 2
+
+Retrieves the collection revision ID.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.revision();
// data contains the collection's revision
+
+Inserts a new document with the given data into the collection.
The contents of the new document.
+Optional options: InsertDocumentOptions
Options for inserting the document.
+const db = new Database();
const collection = db.collection("some-collection");
const result = await collection.save(
{ _key: "a", color: "blue", count: 1 },
{ returnNew: true }
);
console.log(result.new.color, result.new.count); // "blue" 1
+
+Inserts new documents with the given data into the collection.
The contents of the new documents.
+Optional options: InsertDocumentOptions
Options for inserting the documents.
+const db = new Database();
const collection = db.collection("some-collection");
const result = await collection.saveAll(
[
{ _key: "a", color: "blue", count: 1 },
{ _key: "b", color: "red", count: 2 },
],
{ returnNew: true }
);
console.log(result[0].new.color, result[0].new.count); // "blue" 1
console.log(result[1].new.color, result[1].new.count); // "red" 2
+
+Deletes all documents in the collection.
+Optional options: TruncateCollectionOptions
const db = new Database();
const collection = db.collection("some-collection");
await collection.truncate();
// millions of documents cry out in terror and are suddenly silenced,
// the collection "some-collection" is now empty
+
+Updates an existing document in the collection.
+Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
The data for updating the document.
+Optional options: UpdateDocumentOptions
Options for updating the document.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.save({ _key: "a", color: "blue", count: 1 });
const result = await collection.update(
"a",
{ count: 2 },
{ returnNew: true }
);
console.log(result.new.color, result.new.count); // "blue" 2
+
+Updates existing documents in the collection, identified by the _key or _id of each document.
The data for updating the documents.
+Optional options: Omit<UpdateDocumentOptions, "ifMatch">
Options for updating the documents.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.save({ _key: "a", color: "blue", count: 1 });
await collection.save({ _key: "b", color: "green", count: 3 });
const result = await collection.updateAll(
[
{ _key: "a", count: 2 },
{ _key: "b", count: 4 }
],
{ returnNew: true }
);
console.log(result[0].new.color, result[0].new.count); // "blue" 2
console.log(result[1].new.color, result[1].new.count); // "green" 4
+
+Represents an edge collection in a databases.Database.
+See DocumentCollection for a more generic variant of this interface more suited for regular document collections.
+See also graphs.GraphEdgeCollection for the type representing an edge collection in a graphs.Graph.
+When using TypeScript, collections can be cast to a specific edge document data type to increase type safety.
+interface Friend {
startDate: number;
endDate?: number;
}
const db = new Database();
const edges = db.collection("friends") as EdgeCollection<Friend>;
+
+Type to represent edge document contents returned by the server (including computed properties).
+Type to represent edge document contents passed when inserting or replacing edge documents (without computed properties).
+Readonly
databaseDatabase this collection belongs to.
+Readonly
nameName of the collection.
+Retrieves the collection checksum.
+Optional
options: CollectionChecksumOptionsOptions for retrieving the checksum.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.checksum();
// data contains the collection's checksum
+
+Triggers compaction for a collection.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.compact();
// Background compaction is triggered on the collection
+
+Retrieves information about the number of documents in a collection.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.count();
// data contains the collection's count
+
+Creates a collection with the given options and the instance's name.
See also databases.Database#createCollection and databases.Database#createEdgeCollection.
+Note: When called on an EdgeCollection instance in TypeScript, the type option must still be set to the correct CollectionType. Otherwise this will result in the collection being created with the default type (i.e. as a document collection).
Optional options: CollectionPropertiesOptions & { ... }
Options for creating the collection.
+const db = new Database();
const collection = db.collection("potatoes");
await collection.create();
// the document collection "potatoes" now exists
+
+const db = new Database();
const collection = db.collection("friends");
await collection.create({ type: CollectionType.EDGE_COLLECTION });
// the edge collection "friends" now exists
+
+interface Friend {
startDate: number;
endDate?: number;
}
const db = new Database();
const collection = db.collection("friends") as EdgeCollection<Friend>;
// even in TypeScript you still need to indicate the collection type
// if you want to create an edge collection
await collection.create({ type: CollectionType.EDGE_COLLECTION });
// the edge collection "friends" now exists
+
+Retrieves the document matching the given key or id.
+Throws an exception when passed a document or _id from a different collection, or if the document does not exist.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
Optional options: ReadDocumentOptions
Options for retrieving the document.
+const db = new Database();
const collection = db.collection("some-collection");
try {
const document = await collection.document("abc123");
console.log(document);
} catch (e: any) {
console.error("Could not find document");
}
+
+const db = new Database();
const collection = db.collection("some-collection");
const document = await collection.document("abc123", { graceful: true });
if (document) {
console.log(document);
} else {
console.error("Document does not exist");
}
+
+Retrieves the document matching the given key or id.
+Throws an exception when passed a document or _id from a different collection, or if the document does not exist.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
If set to true, null is returned instead of an exception being thrown if the document does not exist.
const db = new Database();
const collection = db.collection("some-collection");
try {
const document = await collection.document("abc123", false);
console.log(document);
} catch (e: any) {
console.error("Could not find document");
}
+
+const db = new Database();
const collection = db.collection("some-collection");
const document = await collection.document("abc123", true);
if (document) {
console.log(document);
} else {
console.error("Document does not exist");
}
+
+Checks whether a document matching the given key or id exists in this collection.
+Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
Optional options: DocumentExistsOptions
const db = new Database();
const collection = db.collection("some-collection");
const exists = await collection.documentExists("abc123");
if (!exists) {
console.log("Document does not exist");
}
+
+Derives a document _id from the given selector for this collection.
Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
const db = new Database();
const collection = db.collection("some-collection");
const meta = await collection.save({ foo: "bar" }, { returnNew: true });
const doc = meta.new;
console.log(collection.documentId(meta)); // via meta._id
console.log(collection.documentId(doc)); // via doc._id
console.log(collection.documentId(meta._key)); // also works
+
+const db = new Database();
const collection1 = db.collection("some-collection");
const collection2 = db.collection("other-collection");
const meta = await collection1.save({ foo: "bar" });
// Mixing collections is usually a mistake
console.log(collection1.documentId(meta)); // ok: same collection
console.log(collection2.documentId(meta)); // throws: wrong collection
console.log(collection2.documentId(meta._id)); // also throws
console.log(collection2.documentId(meta._key)); // ok but wrong collection
+
+Retrieves the documents matching the given key or id values.
+Throws an exception when passed a document or _id from a different collection, or if the document does not exist.
Array of document _key, _id or objects with either of those properties (e.g. a document from this collection).
Optional options: BulkReadDocumentsOptions
Options for retrieving the documents.
+const db = new Database();
const collection = db.collection("some-collection");
try {
const documents = await collection.documents(["abc123", "xyz456"]);
console.log(documents);
} catch (e: any) {
console.error("Could not find document");
}
+
+Deletes the collection from the database.
+Optional options: DropCollectionOptions
Options for dropping the collection.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.drop();
// The collection "some-collection" is now an ex-collection
+
+Deletes the index with the given name or id from the database.
Index name, id or object with either property.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.dropIndex("some-index");
// The index "some-index" no longer exists
+
+Retrieves a list of all edges in this collection of the document matching the given selector.
Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
Optional options: DocumentEdgesOptions
Options for retrieving the edges.
+const db = new Database();
const collection = db.collection("edges");
await collection.import([
["_key", "_from", "_to"],
["x", "vertices/a", "vertices/b"],
["y", "vertices/a", "vertices/c"],
["z", "vertices/d", "vertices/a"],
]);
const edges = await collection.edges("vertices/a");
console.log(edges.map((edge) => edge._key)); // ["x", "y", "z"]
+
+Creates a persistent index on the collection if it does not already exist.
+Options for creating the persistent index.
+const db = new Database();
const collection = db.collection("some-collection");
// Create a unique index for looking up documents by username
await collection.ensureIndex({
type: "persistent",
fields: ["username"],
name: "unique-usernames",
unique: true
});
+
+Creates a TTL index on the collection if it does not already exist.
+Options for creating the TTL index.
+const db = new Database();
const collection = db.collection("some-collection");
// Expire documents with "createdAt" timestamp one day after creation
await collection.ensureIndex({
type: "ttl",
fields: ["createdAt"],
expireAfter: 60 * 60 * 24 // 24 hours
});
+
+const db = new Database();
const collection = db.collection("some-collection");
// Expire documents with "expiresAt" timestamp according to their value
await collection.ensureIndex({
type: "ttl",
fields: ["expiresAt"],
expireAfter: 0 // when attribute value is exceeded
});
+
+Creates a multi-dimensional index on the collection if it does not already exist.
+Options for creating the multi-dimensional index.
+const db = new Database();
const collection = db.collection("some-points");
// Create a multi-dimensional index for the attributes x, y and z
await collection.ensureIndex({
type: "mdi",
fields: ["x", "y", "z"],
fieldValueTypes: "double"
});
+
+Creates a geo index on the collection if it does not already exist.
+Options for creating the geo index.
+const db = new Database();
const collection = db.collection("some-collection");
// Create an index for GeoJSON data
await collection.ensureIndex({
type: "geo",
fields: ["lngLat"],
geoJson: true
});
+
+Creates an inverted index on the collection if it does not already exist.
+Options for creating the inverted index.
+const db = new Database();
const collection = db.collection("some-collection");
// Create an inverted index
await collection.ensureIndex({
type: "inverted",
fields: ["a", { name: "b", analyzer: "text_en" }]
});
+
+Creates an index on the collection if it does not already exist.
+Options for creating the index.
+const db = new Database();
const collection = db.collection("some-collection");
// Create a unique index for looking up documents by username
await collection.ensureIndex({
type: "persistent",
fields: ["username"],
name: "unique-usernames",
unique: true
});
+
+Retrieves statistics for a collection.
+Optional details: boolean
Whether to return extended storage engine-specific details with the figures, which may cause additional load and impact performance.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.figures();
// data contains the collection's figures
+
+Retrieves general information about the collection.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.get();
// data contains general information about the collection
+
+Retrieves the shardId of the shard responsible for the given document.
Document in the collection to look up the shardId of.
const db = new Database();
const collection = db.collection("some-collection");
const responsibleShard = await collection.getResponsibleShard({ _key: "abc123" });
+
+Bulk imports the given data into the collection.
The data to import, as an array of edge data.
+Optional options: ImportDocumentsOptions
Options for importing the data.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.import(
[
{ _key: "x", _from: "vertices/a", _to: "vertices/b", weight: 1 },
{ _key: "y", _from: "vertices/a", _to: "vertices/c", weight: 2 }
]
);
+
+Bulk imports the given data into the collection.
The data to import, as an array containing a single array of attribute names followed by one or more arrays of attribute values for each edge document.
+Optional options: ImportDocumentsOptions
Options for importing the data.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.import(
[
[ "_key", "_from", "_to", "weight" ],
[ "x", "vertices/a", "vertices/b", 1 ],
[ "y", "vertices/a", "vertices/c", 2 ]
]
);
+
+Bulk imports the given data into the collection.
If type is omitted, data must contain one JSON array per line with the first array providing the attribute names and all other arrays providing attribute values for each edge document.
If type is set to "documents", data must contain one JSON document per line.
If type is set to "list", data must contain a JSON array of edge documents.
If type is set to "auto", data can be in either of the formats supported by "documents" or "list".
The data to import as a Buffer (Node), Blob (browser) or string.
+Optional options: ImportDocumentsOptions & { ... }
Options for importing the data.
+const db = new Database();
const collection = db.collection("some-collection");
await collection.import(
'{"_key":"x","_from":"vertices/a","_to":"vertices/b","weight":1}\r\n' +
'{"_key":"y","_from":"vertices/a","_to":"vertices/c","weight":2}\r\n',
{ type: "documents" } // or "auto"
);
+
+const db = new Database();
const collection = db.collection("some-collection");
await collection.import(
'[{"_key":"x","_from":"vertices/a","_to":"vertices/b","weight":1},' +
'{"_key":"y","_from":"vertices/a","_to":"vertices/c","weight":2}]',
{ type: "list" } // or "auto"
);
+
+const db = new Database();
const collection = db.collection("some-collection");
await collection.import(
'["_key","_from","_to","weight"]\r\n' +
'["x","vertices/a","vertices/b",1]\r\n' +
'["y","vertices/a","vertices/c",2]\r\n'
);
+
+Retrieves a list of all incoming edges of the document matching the given selector.
Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
Optional options: DocumentEdgesOptions
Options for retrieving the edges.
+const db = new Database();
const collection = db.collection("edges");
await collection.import([
["_key", "_from", "_to"],
["x", "vertices/a", "vertices/b"],
["y", "vertices/a", "vertices/c"],
["z", "vertices/d", "vertices/a"],
]);
const edges = await collection.inEdges("vertices/a");
console.log(edges.map((edge) => edge._key)); // ["z"]
+
+Returns an index description by name or id, if it exists.
Index name, id or object with either property.
+const db = new Database();
const collection = db.collection("some-collection");
const index = await collection.index("some-index");
+
+Returns a list of all index descriptions for the collection.
+Optional options: ListIndexesOptions
Options for fetching the index list.
+const db = new Database();
const collection = db.collection("some-collection");
const indexes = await collection.indexes();
+
+const db = new Database();
const collection = db.collection("some-collection");
const allIndexes = await collection.indexes<HiddenIndexDescription>({
withHidden: true
});
+
+Instructs ArangoDB to load as many indexes of the collection into memory as permitted by the memory limit.
+const db = new Database();
const collection = db.collection("indexed-collection");
await collection.loadIndexes();
// the indexes are now loaded into memory
+
+Retrieves a list of all outgoing edges of the document matching the given selector.
Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
Optional options: DocumentEdgesOptions
Options for retrieving the edges.
+const db = new Database();
const collection = db.collection("edges");
await collection.import([
["_key", "_from", "_to"],
["x", "vertices/a", "vertices/b"],
["y", "vertices/a", "vertices/c"],
["z", "vertices/d", "vertices/a"],
]);
const edges = await collection.outEdges("vertices/a");
console.log(edges.map((edge) => edge._key)); // ["x", "y"]
+
+Retrieves the collection's properties.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.properties();
// data contains the collection's properties
+
+Replaces the properties of the collection.
+const db = new Database();
const collection = db.collection("some-collection");
const result = await collection.setProperties({ waitForSync: true });
// the collection will now wait for data being written to disk
// whenever a document is changed
+
+Instructs ArangoDB to recalculate the collection's document count to fix any inconsistencies.
+const db = new Database();
const collection = db.collection("inconsistent-collection");
const badData = await collection.count();
// oh no, the collection count looks wrong -- fix it!
await collection.recalculateCount();
const goodData = await collection.count();
// goodData contains the collection's improved count
+
+Removes an existing document from the collection.
+Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
Optional options: RemoveDocumentOptions
Options for removing the document.
+const db = new Database();
const collection = db.collection("friends");
const doc = await collection.document("musadir");
await collection.remove(doc);
// document with key "musadir" deleted
+
+Removes existing documents from the collection.
+Throws an exception when passed any document or _id from a different collection.
Documents _key, _id or objects with either of those properties (e.g. documents from this collection).
Optional options: RemoveDocumentOptions
Options for removing the documents.
+const db = new Database();
const collection = db.collection("friends");
await collection.removeAll(["musadir", "salman"]);
// documents with keys "musadir" and "salman" deleted
+
+Renames the collection and updates the instance's name to newName.
Additionally removes the instance from the databases.Database's internal cache.
+Note: Renaming collections may not be supported when ArangoDB is running in a cluster configuration.
+The new name of the collection.
+const db = new Database();
const collection1 = db.collection("some-collection");
await collection1.rename("other-collection");
const collection2 = db.collection("some-collection");
const collection3 = db.collection("other-collection");
// Note all three collection instances are different objects but
// collection1 and collection3 represent the same ArangoDB collection!
+
+Replaces an existing document in the collection.
+Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
The contents of the new document.
+Optional options: ReplaceDocumentOptions
Options for replacing the document.
+const db = new Database();
const collection = db.collection("friends");
await collection.save(
{
_key: "musadir",
_from: "users/rana",
_to: "users/mudasir",
active: true,
best: true
}
);
const result = await collection.replace(
"musadir",
{ active: false },
{ returnNew: true }
);
console.log(result.new.active, result.new.best); // false undefined
+
+Replaces existing documents in the collection, identified by the _key or _id of each document.
The documents to replace.
+Optional options: ReplaceDocumentOptions
Options for replacing the documents.
+const db = new Database();
const collection = db.collection("friends");
await collection.save(
{
_key: "musadir",
_from: "users/rana",
_to: "users/mudasir",
active: true,
best: true
}
);
await collection.save(
{
_key: "salman",
_from: "users/rana",
_to: "users/salman",
active: false,
best: false
}
);
const result = await collection.replaceAll(
[
{ _key: "musadir", active: false },
{ _key: "salman", active: true, best: true }
],
{ returnNew: true }
);
console.log(result[0].new.active, result[0].new.best); // false undefined
console.log(result[1].new.active, result[1].new.best); // true true
+
+Retrieves the collection revision ID.
+const db = new Database();
const collection = db.collection("some-collection");
const data = await collection.revision();
// data contains the collection's revision
+
+Inserts a new document with the given data into the collection.
The contents of the new document.
+Optional options: InsertDocumentOptions
Options for inserting the document.
+const db = new Database();
const collection = db.collection("friends");
const result = await collection.save(
{ _from: "users/rana", _to: "users/mudasir", active: false },
{ returnNew: true }
);
+
+Inserts new documents with the given data into the collection.
The contents of the new documents.
+Optional options: InsertDocumentOptions
Options for inserting the documents.
+const db = new Database();
const collection = db.collection("friends");
const result = await collection.saveAll(
[
{ _from: "users/rana", _to: "users/mudasir", active: false },
{ _from: "users/rana", _to: "users/salman", active: true }
],
{ returnNew: true }
);
+
+Deletes all documents in the collection.
+Optional options: TruncateCollectionOptions
const db = new Database();
const collection = db.collection("some-collection");
await collection.truncate();
// millions of documents cry out in terror and are suddenly silenced,
// the collection "some-collection" is now empty
+
+Updates an existing document in the collection.
+Throws an exception when passed a document or _id from a different collection.
Document _key, _id or object with either of those properties (e.g. a document from this collection).
The data for updating the document.
+Optional options: UpdateDocumentOptions
Options for updating the document.
+const db = new Database();
const collection = db.collection("friends");
await collection.save(
{
_key: "musadir",
_from: "users/rana",
_to: "users/mudasir",
active: true,
best: true
}
);
const result = await collection.update(
"musadir",
{ active: false },
{ returnNew: true }
);
console.log(result.new.active, result.new.best); // false true
+
+Updates existing documents in the collection, identified by the _key or _id of each document.
The data for updating the documents.
+Optional options: UpdateDocumentOptions
Options for updating the documents.
+const db = new Database();
const collection = db.collection("friends");
await collection.save(
{
_key: "musadir",
_from: "users/rana",
_to: "users/mudasir",
active: true,
best: true
}
);
await collection.save(
{
_key: "salman",
_from: "users/rana",
_to: "users/salman",
active: false,
best: false
}
);
const result = await collection.updateAll(
[
{ _key: "musadir", active: false },
{ _key: "salman", active: true, best: true }
],
{ returnNew: true }
);
console.log(result[0].new.active, result[0].new.best); // false true
console.log(result[1].new.active, result[1].new.best); // true true
+
+Processed response object.
+Readonly arrayBuffer
Readonly blob
Readonly body
Readonly bodyUsed
Readonly clone
Readonly formData
This method is not recommended for parsing multipart/form-data bodies in server environments. It is recommended to use a library such as @fastify/busboy as follows:
+import { Busboy } from '@fastify/busboy'
import { Readable } from 'node:stream'
const response = await fetch('...')
const busboy = new Busboy({ headers: { 'content-type': response.headers.get('content-type') } })
// handle events emitted from `busboy`
Readable.fromWeb(response.body).pipe(busboy)
+
+Readonly headers
Readonly json
Readonly ok
Optional parsedBody
Parsed response body.
+Readonly redirected
Readonly request
Fetch request object.
+Readonly status
Readonly statusText
Readonly text
Readonly type
Readonly url
+Options for creating a stored value in an ArangoSearch View.
+Optional cache
(Enterprise Edition only.) If set to true, then stored values will always be cached in memory.
Default: false
Optional compression
How the attribute values should be compressed.
+Default: "lz4"
Attribute paths for which values should be stored in the view index in addition to those used for sorting via primarySort.
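+As an illustrative sketch only (the View name "articles-view" and field names are placeholders), stored values can be configured when creating an ArangoSearch View via databases.Database#createView:
+const db = new Database();
await db.createView("articles-view", {
  type: "arangosearch",
  // store these attribute values directly in the View index
  storedValues: [{ fields: ["title", "author"], compression: "lz4" }]
});
+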
import type { VersionInfo } from "arangojs/administration";
+
+The "administration" module provides types for database administration.
+import type { Analyzer } from "arangojs/analyzers";
+
+The "analyzers" module provides Analyzer related types and interfaces +for TypeScript.
+import { aql } from "arangojs/aql";
+
+The "aql" module provides the aql template string handler and +helper functions, as well as associated types and interfaces for TypeScript.
+The aql function and namespace is also re-exported by the "index" module.
+import type { ClusterImbalanceInfo } from "arangojs/cluster";
+
+The "cluster" module provides types for cluster management.
+import type {
DocumentCollection,
EdgeCollection,
} from "arangojs/collections";
+
+The "collections" module provides collection related types and interfaces +for TypeScript.
+import type { ConfigOptions } from "arangojs/configuration";
+
+The "configuration" module provides configuration related types for +TypeScript.
+import type { ArangoApiResponse } from "arangojs/connection";
+
+The "connection" module provides connection related types for TypeScript.
+import type { Cursor, BatchCursor } from "arangojs/cursors";
+
+The "cursors" module provides cursor-related types and interfaces for +TypeScript.
+import type { Document, Edge } from "arangojs/documents";
+
+The "documents" module provides document/edge related types for TypeScript.
+import type { ArangoError, HttpError } from "arangojs/errors";
+
+The "errors" module provides types and interfaces for TypeScript related +to arangojs error handling.
+import type { FoxxManifest } from "arangojs/foxx-manifest";
+
+The "foxx-manifest" module provides the Foxx manifest type for TypeScript.
+Generated from JSON Schema using json-schema-to-typescript.
import type {
Graph,
GraphVertexCollection,
GraphEdgeCollection,
} from "arangojs/graphs";
+
+The "graphs" module provides graph related types and interfaces +for TypeScript.
+import type { HotBackupOptions } from "arangojs/hot-backups";
+
+The "hot-backups" module provides types for managing hot backups.
+import arangojs, { aql, Database } from "arangojs";
+
+The "index" module is the default entry point when importing the arangojs +module or using the web build in the browser.
+If you are just getting started, you probably want to use the +arangojs function, which is also the default export of this module, +or the databases.Database class for which it is a wrapper.
+import type {
FulltextIndex,
GeoIndex,
MdiIndex,
PersistentIndex,
PrimaryIndex,
TtlIndex,
} from "arangojs/indexes";
+
+The "indexes" module provides index-related types for TypeScript.
+import type { Job } from "arangojs/jobs";
+
+The "jobs" module provides job-related types for TypeScript.
+import type { LogLevel } from "arangojs/logs";
+
+The "logs" module provides types for ArangoDB logs.
+import type { QueryOptions } from "arangojs/queries";
+
+The "query" module provides query related types for TypeScript.
+import type { Route } from "arangojs/routes";
+
+The "routes" module provides route related types and interfaces for +TypeScript.
+import type { ServiceInfo } from "arangojs/services";
+
+The "services" module provides types for Foxx services.
+import type { Transaction } from "arangojs/transactions";
+
+The "transactions" module provides transaction related types and interfaces +for TypeScript.
+import type { ArangoUser } from "arangojs/users";
+
+The "users" module provides types for ArangoDB users.
+import type { ArangoSearchView } from "arangojs/views";
+
+The "views" module provides View related types and interfaces for +TypeScript.
+Cluster deployment information for support purposes.
+ISO 8601 datetime string of when the information was requested.
+Information about the deployment.
+Number of agents in the cluster.
+Number of coordinators in the cluster.
+Number of DB-Servers in the cluster.
+Information about the servers in the cluster.
+Information about the shards in the cluster.
+Number of collections in the cluster.
+Number of followers in the cluster.
+Number of leaders in the cluster.
+Number of real leaders in the cluster.
+Number of servers in the cluster.
+Number of shards in the cluster.
+Deployment mode:
+"single"
: A single server deployment.
"cluster"
: A cluster deployment.
+(Cluster only.) Information about the ArangoDB instance as well as the host machine.
+Information about the storage engine.
+Optional endianness?: "little" | "big"
Endianness of the storage engine.
+Name of the storage engine.
+Optional supports?: { ... }
Features supported by the storage engine.
+Optional aliases?: { ... }
Aliases supported by the storage engine.
+Optional indexes?: Record<string, string>
Index type aliases supported by the storage engine.
+Optional indexes?: string[]
Index types supported by the storage engine.
+Server availability.
+"default"
: The server is operational.
"readonly"
: The server is in read-only mode.
false
: The server is not available.
Information about the server status.
+Optional address?: string
(Cluster Coordinators and DB-Servers only.) The address of the server.
+Optional agency?: { ... }
(Cluster Coordinators and DB-Servers only.) Information about the Agency.
+Information about the communication with the Agency.
+A list of possible Agency endpoints.
+Optional agent?: { ... }
(Cluster Agents only.) Information about the Agents.
+The endpoint of the queried Agent.
+Server ID of the queried Agent.
+Server ID of the leading Agent.
+Whether the queried Agent is the leader.
+The current term number.
+Optional coordinator?: { ... }
(Cluster Coordinators only.) Information about the Coordinators.
+The server ID of the Coordinator that is the Foxx master.
+Whether the queried Coordinator is the Foxx master.
+Whether the Foxx API is enabled.
+A host identifier defined by the HOST or NODE_NAME environment variable, or a fallback value using a machine identifier or the cluster/Agency address.
+Optional hostname?: string
A hostname defined by the HOSTNAME environment variable.
+ArangoDB Edition.
+Server operation mode.
+Use operationMode instead.
Server operation mode.
+The process ID of arangod.
+Server type.
+Information about the server status.
+Whether the maintenance mode is enabled.
+Optional persistedId
(Cluster only.) The persisted ID.
+Startup and recovery information.
+Internal name of the feature that is currently being prepared, started, stopped or unprepared.
+Name of the lifecycle phase the instance is currently in.
+Current recovery sequence number value.
+Whether writes are disabled.
+Optional rebootId
(Cluster only.) The reboot ID. Changes on every restart.
+Either "SINGLE", "COORDINATOR", "PRIMARY" (DB-Server), or "AGENT"
+Optional serverId
(Cluster Coordinators and DB-Servers only.) The server ID.
+Optional state?: "STARTUP" | "SERVING" | "SHUTDOWN"
(Cluster Coordinators and DB-Servers only.) Either "STARTUP", "SERVING", or "SHUTDOWN".
+The server version string.
+Whether writes are enabled.
+Use readOnly instead.
Single server deployment information for support purposes.
+ISO 8601 datetime string of when the information was requested.
+Information about the deployment.
+Deployment mode:
+"single"
: A single server deployment.
"cluster"
: A cluster deployment.
Result of retrieving database version information.
+Optional details?: { ... }
Additional information about the ArangoDB server.
+ArangoDB license type or "edition".
+Value identifying the server type, i.e. "arango".
ArangoDB server version.
+An object describing an Analyzer.
+Name of a feature enabled for an Analyzer.
+An object describing an AQL Analyzer
+Token data type for an AQL Analyzer.
+Text case conversion type.
+(Enterprise Edition only.) An object describing a Classification Analyzer
+An object describing a Collation Analyzer
+Analyzer type and its type-specific properties.
+Options for creating an AQL Analyzer
+Optional batchSize
Number between 1 and 1000 that determines the batch size for reading data from the query.
Default: 1
+Optional collapsePositions
If set to true, the position is set to 0 for all members of the query result array.
Default: false
+Optional keepNull
If set to false, null values will be discarded from the View index.
Default: true
+Optional memoryLimit
Memory limit for query execution in bytes.
+Default: 1048576 (1 MiB)
AQL query to be executed.
+Optional returnType
Data type of the returned tokens.
+Default: "string"
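+A minimal sketch of creating an AQL Analyzer with these options (the Analyzer name "soundexed" and the query are illustrative; per the ArangoDB Analyzer API the input value is exposed to the query as the @param bind parameter):
+const db = new Database();
const analyzer = db.analyzer("soundexed");
await analyzer.create({
  type: "aql",
  properties: {
    queryString: "RETURN SOUNDEX(@param)", // computes tokens from the input
    batchSize: 10,
    keepNull: false,
    returnType: "string"
  }
});
+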
(Enterprise Edition only.) Options for creating a Classification Analyzer
+On-disk path to the trained fastText supervised model.
+Optional threshold?: number
Probability threshold for which a label will be assigned to an input.
+Default: 0.99
+Optional top_k
Number of class labels that will be produced per input.
+Default: 1
Options for creating a Collation Analyzer
+Text locale.
+Format: language[_COUNTRY][.encoding][@variant]
Options for creating a Delimiter Analyzer.
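+A sketch of creating a Delimiter Analyzer (the Analyzer name is a placeholder; the delimiter property follows the ArangoDB Analyzer API):
+const db = new Database();
const analyzer = db.analyzer("semicolon-split");
await analyzer.create({
  type: "delimiter",
  properties: { delimiter: ";" } // split input on semicolons
});
+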
+Options for creating a GeoJSON Analyzer
+Optional options?: { ... }
Options for fine-tuning geo queries.
+Default: { maxCells: 20, minLevel: 4, maxLevel: 23 }
+Optional maxCells
Optional maxLevel
Optional minLevel
Optional type?: GeoType
If set to "centroid", only the centroid of the input geometry will be computed and indexed.
If set to "point", only GeoJSON objects of type Point will be indexed and all other geometry types will be ignored.
Default: "shape"
Options for creating a GeoPoint Analyzer
+Optional latitude?: string[]
Attribute paths of the latitude value relative to the field for which the Analyzer is defined in the View.
+Optional longitude?: string[]
Attribute paths of the longitude value relative to the field for which the Analyzer is defined in the View.
+Optional options?: { ... }
Options for fine-tuning geo queries.
+Default: { maxCells: 20, minLevel: 4, maxLevel: 23 }
+Optional maxCells
Optional maxLevel
Optional minLevel
+(Enterprise Edition only.) Options for creating a Geo S2 Analyzer
+Optional format?: GeoS2Format
If set to "latLngDouble", each latitude and longitude value is stored as an 8-byte floating-point value (16 bytes per coordinate pair).
If set to "latLngInt", each latitude and longitude value is stored as a 4-byte integer value (8 bytes per coordinate pair).
If set to "s2Point", each longitude-latitude pair is stored in the native format of Google S2 (24 bytes per coordinate pair).
Default: "latLngDouble"
+Optional options?: { ... }
Options for fine-tuning geo queries.
+Default: { maxCells: 20, minLevel: 4, maxLevel: 23 }
+Optional maxCells
Optional maxLevel
Optional minLevel
+Optional type?: GeoType
If set to "centroid", only the centroid of the input geometry will be computed and indexed.
If set to "point", only GeoJSON objects of type Point will be indexed and all other geometry types will be ignored.
Default: "shape"
Options for creating an Identity Analyzer.
+(Enterprise Edition only.) Options for creating a MinHash Analyzer
+An Analyzer definition-like object with type and properties attributes.
Size of the MinHash signature.
+Options for creating a Multi-Delimiter Analyzer.
+This value will be used as delimiter to split text into tokens as specified in RFC 4180, without starting new records on newlines.
+(Enterprise Edition only.) Options for creating a NearestNeighbors Analyzer.
+On-disk path to the trained fastText supervised model.
+Optional top_k
Number of class labels that will be produced per input.
+Default: 1
Options for creating an Ngram Analyzer.
+Maximum n-gram length.
+Minimum n-gram length.
+Output the original value as well.
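+A sketch of creating an Ngram Analyzer with these options (the Analyzer name "trigrams" is a placeholder):
+const db = new Database();
const analyzer = db.analyzer("trigrams");
await analyzer.create({
  type: "ngram",
  properties: { min: 3, max: 3, preserveOriginal: false }
});
+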
+Options for creating a Norm Analyzer.
+Optional accent?: boolean
Preserve accents in returned words.
+Default: false
+Optional case?: CaseConversion
Case conversion.
+Default: "lower"
Text locale.
+Format: language[_COUNTRY][.encoding][@variant]
Options for creating a Pipeline Analyzer
+Definitions for Analyzers to chain in this Pipeline Analyzer.
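+A sketch of chaining Analyzers in a Pipeline Analyzer (the Analyzer name is a placeholder; the exact locale format accepted may vary by server version):
+const db = new Database();
const analyzer = db.analyzer("lowercase-stem");
await analyzer.create({
  type: "pipeline",
  properties: {
    pipeline: [
      { type: "norm", properties: { locale: "en", case: "lower" } },
      { type: "stem", properties: { locale: "en" } }
    ]
  }
});
+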
+Options for creating a Segmentation Analyzer
+Optional break?: SegmentationTokenType
Which tokens should be returned.
+Default: "alpha"
+Optional case?: CaseConversion
What case all returned tokens should be converted to if applicable.
+Default: "none"
Options for creating a Stem Analyzer.
+Text locale.
+Format: language[_COUNTRY][.encoding][@variant]
Options for creating a Stopwords Analyzer
+Optional hex?: boolean
Whether stopword values should be interpreted as hex-encoded strings.
+Default: false
Array of strings that describe the tokens to be discarded.
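+A sketch of creating a Stopwords Analyzer (the Analyzer name and word list are placeholders):
+const db = new Database();
const analyzer = db.analyzer("skip-articles");
await analyzer.create({
  type: "stopwords",
  properties: { stopwords: ["the", "a", "an"], hex: false }
});
+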
+Options for creating a Text Analyzer.
+Optional accent?: boolean
Preserve accents in returned words.
+Default: false
+Optional case?: CaseConversion
Case conversion.
+Default: "lower"
+Optional edgeNgram
If present, then edge n-grams are generated for each token (word).
+Optional max?: number
Optional min?: number
Optional preserveOriginal?: boolean
Text locale.
Format: language[_COUNTRY][.encoding][@variant]
+Optional stemming?: boolean
Apply stemming on returned words.
+Default: true
+Optional stopwords?: string[]
Words to omit from result.
+Defaults to the words loaded from the file at stopwordsPath.
+Optional stopwordsPath
Path with a language sub-directory containing files with words to omit.
Defaults to the path specified in the server-side environment variable IRESEARCH_TEXT_STOPWORD_PATH or the current working directory of the ArangoDB process.
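+A sketch of creating a Text Analyzer with these options (the Analyzer name is a placeholder; the exact locale format accepted may vary by server version):
+const db = new Database();
const analyzer = db.analyzer("text-en-nostem");
await analyzer.create({
  type: "text",
  properties: {
    locale: "en",
    case: "lower",
    accent: false,
    stemming: false,
    stopwords: []
  }
});
+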
Options for creating a Wildcard Analyzer.
+Optional analyzer?: Omit<CreateAnalyzerOptions, "features">
An Analyzer definition-like object with type and properties attributes.
N-gram length. Must be a positive integer greater than or equal to 2.
+An object describing a Delimiter Analyzer.
+An object describing a GeoJSON Analyzer
+An object describing a GeoPoint Analyzer
+(Enterprise Edition only.) An object describing a GeoS2 Analyzer
+Storage format of a Geo S2 Analyzer.
+GeoJSON type.
+An object describing an Identity Analyzer.
+(Enterprise Edition only.) An object describing a MinHash Analyzer
+An object describing a Multi Delimiter Analyzer.
+(Enterprise Edition only.) An object describing a NearestNeighbors Analyzer
+An object describing an Ngram Analyzer.
+An object describing a Norm Analyzer.
+An object describing a Pipeline Analyzer
+An object describing a Segmentation Analyzer
+Token type for a Segmentation Analyzer.
+An object describing a Stem Analyzer.
+An object describing a Stopwords Analyzer
+An object describing a Text Analyzer.
+An object describing a Wildcard Analyzer
+Optional analyzer?: Omit<AnalyzerDescription, "name" | "features">
A value that can be used in an AQL template string or passed to an AQL helper function.
+Information about a cluster imbalance.
+Information about the leader imbalance.
+The measure of the total imbalance. A high value indicates a high imbalance.
+The measure of the leader shard distribution. The higher the number, the worse the distribution.
+The number of leader shards per DB-Server.
+The ideal weight of leader shards per DB-Server.
+The sum of shards, counting leader shards only.
+The sum of all weights.
+The weight of leader shards per DB-Server. A leader has a weight of 1 by default but it is higher if collections can only be moved together because of distributeShardsLike.
Information about the shard imbalance.
+The measure of the total imbalance. A high value indicates a high imbalance.
+The number of leader and follower shards per DB-Server.
+The size of shards per DB-Server.
+The ideal size of shards per DB-Server.
+The sum of shards, counting leader and follower shards.
+The sum of system collection shards, counting leader shards only.
+The sum of the sizes.
+Collection ID of the collection the shard belongs to.
+The server name from which to move.
+True if this is a leader move shard operation.
+Shard ID of the shard to be moved.
+The ID of the destination server.
+Options for rebalancing the cluster.
+Optional databasesExcluded
A list of database names to exclude from the analysis.
+Default: []
+Optional excludeSystemCollections
Ignore system collections in the rebalance plan.
+Default: false
+Optional leaderChanges
Allow leader changes without moving data.
+Default: true
+Optional maximumNumberOfMoves
Maximum number of moves to be computed.
+Default: 1000
+Optional moveFollowers
Allow moving followers.
+Default: false
+Optional moveLeaders
Allow moving leaders.
+Default: false
+Optional piFactor
Default: 256**6
The result of a cluster rebalance.
+Expected imbalance after the suggested move shard operations are applied.
+Imbalance before the suggested move shard operations are applied.
+Suggested move shard operations.
+Information about the current state of the cluster imbalance.
+The number of pending move shard operations.
+The number of planned move shard operations.
+Options for retrieving a collection checksum.
+Optional withData
If set to true, document data will be included in the calculation of the checksum.
Default: false
+Optional withRevisions
If set to true, revision IDs will be included in the calculation of the checksum.
Default: false
General information about a collection.
+A globally unique identifier for this collection.
+Collection name.
+An integer indicating the collection loading status.
+An integer indicating the collection type.
+An object defining the collection's key generation.
+Optional allowUserKeys
Unless set to false, documents can be created with a user-specified _key attribute.
Default: true
+Optional increment?: number
(Autoincrement only.) How many steps to increment the key each time.
+Optional offset?: number
(Autoincrement only.) Initial offset for the key.
+Optional type?: KeyGenerator
Type of key generator to use.
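+A sketch of creating a collection with these key generation options (the collection name is a placeholder; autoincrement key generation is not available for sharded collections):
+const db = new Database();
await db.createCollection("autoinc-items", {
  keyOptions: {
    type: "autoincrement",
    increment: 5,
    offset: 10,
    allowUserKeys: false
  }
});
+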
+An object defining the collection's key generation.
+Whether documents can be created with a user-specified _key attribute.
+Optional increment?: number
(Autoincrement only.) How many steps to increment the key each time.
+Most recent key that has been generated.
+Optional offset?: number
(Autoincrement only.) Initial offset for the key.
+Type of key generator to use.
+An object defining the properties of a collection.
+Whether the in-memory hash cache is enabled for this collection.
+Computed values applied to documents in this collection.
+Optional distributeShardsLike
(Enterprise Edition cluster only.) If set to a collection name, sharding of the new collection will follow the rules for that collection. As long as the new collection exists, the indicated collection can not be dropped.
+Optional isDisjoint
(Enterprise Edition only.) Whether the SmartGraph this collection belongs to is disjoint.
+Optional isSmart
(Enterprise Edition only.) Whether the collection is used in a SmartGraph or EnterpriseGraph.
+An object defining the collection's key generation.
+Optional numberOfShards
(Cluster only.) Number of shards of this collection.
+Optional replicationFactor
(Cluster only.) Replication factor of the collection.
+Properties for validating documents in the collection.
+Optional shardKeys
(Cluster only.) Keys of this collection that will be used for sharding.
+Optional shardingStrategy
(Cluster only.) Sharding strategy of the collection.
+Optional smartGraphAttribute
(Enterprise Edition cluster only.) Attribute used for sharding.
+Optional smartJoinAttribute
(Enterprise Edition cluster only.) Attribute containing the shard key value of the referred-to smart join collection.
+A human-readable representation of the collection loading status.
+Whether the newer revision-based replication protocol is enabled for this collection.
+Whether data should be synchronized to disk before returning from a document create, update, replace or removal operation.
+(Cluster only.) Write concern for this collection.
+Options for setting a collection's properties.
+See DocumentCollection#properties and EdgeCollection#properties.
+Optional cacheEnabled
Whether the in-memory hash cache is enabled for this collection.
Default: false
+Optional computedValues
Computed values to apply to documents in this collection.
+Optional replicationFactor
(Cluster only.) How many copies of each document should be kept in the cluster.
Default: 1
+Optional schema?: SchemaOptions
Options for validating documents in this collection.
+Optional waitForSync
If set to true, data will be synchronized to disk before returning from a document create, update, replace or removal operation.
Default: false
+Optional writeConcern
(Cluster only.) Write concern for this collection.
+Options for creating a computed value.
+Optional computeOn
Which operations should result in the value being computed.
+Default: ["insert", "update", "replace"]
AQL RETURN expression that computes the value.
Note that when passing an AQL query object, the bindVars will be ignored.
+Optional failOnWarning
Whether the write operation should fail if the expression produces a warning.
+Default: false
+Optional keepNull
If set to false, the field will be unset if the expression evaluates to null. Otherwise the field will be set to the value null. Has no effect if overwrite is set to false.
Default: true
Name of the target attribute of the computed value.
+Optional overwrite?: boolean
If set to false, the computed value will not be applied if the expression evaluates to null.
Default: true
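+A sketch of defining a computed value when creating a collection (collection and attribute names are placeholders; per the ArangoDB computed values API the expression references the document as the @doc bind parameter):
+const db = new Database();
await db.createCollection("users", {
  computedValues: [
    {
      name: "fullName",
      expression: "RETURN CONCAT(@doc.firstName, ' ', @doc.lastName)",
      computeOn: ["insert", "update", "replace"],
      overwrite: true
    }
  ]
});
+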
Properties defining a computed value.
+Which operations should result in the value being computed.
+AQL RETURN expression that computes the value.
Whether the write operation should fail if the expression produces a warning.
+If set to false, the field will be unset if the expression evaluates to null. Otherwise the field will be set to the value null. Has no effect if overwrite is set to false.
Name of the target attribute of the computed value.
+If set to false, the computed value will not be applied if the expression evaluates to null.
Options for creating a collection.
+See databases.Database#createCollection, databases.Database#createEdgeCollection +and DocumentCollection#create or EdgeCollection#create.
+Optional distributeShardsLike
(Enterprise Edition cluster only.) If set to a collection name, sharding of the new collection will follow the rules for that collection. As long as the new collection exists, the indicated collection can not be dropped.
+Optional enforceReplicationFactor
(Cluster only.) Unless set to false, the server will check whether enough replicas are available at creation time and bail out otherwise.
Default: true
+Optional keyOptions
An object defining the collection's key generation.
+Optional numberOfShards
(Cluster only.) Number of shards to distribute the collection across.
+Default: 1
+Optional shardKeys
(Cluster only.) Document attributes to use to determine the target shard for each document.
+Default: ["_key"]
+Optional shardingStrategy
(Cluster only.) Sharding strategy to use.
+Optional smartGraphAttribute
(Enterprise Edition cluster only.) Attribute used for sharding.
+Optional smartJoinAttribute
(Enterprise Edition cluster only.) Attribute containing the shard key value of the referred-to smart join collection.
+Optional waitForSyncReplication
(Cluster only.) Unless set to false, the server will wait for all replicas to create the collection before returning.
Default: true
Options for dropping collections.
+Optional isSystem
Whether the collection is a system collection. If the collection is a system collection, this option must be set to true or ArangoDB will refuse to drop the collection.
Default: false
Type of key generator.
+Options for validating collection documents.
+Optional level?: ValidationLevel
When validation should be applied.
+Default: "strict"
+Optional message?: string
Message to be used if validation fails.
+JSON Schema description of the validation schema for documents.
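+A sketch of applying these validation options to an existing collection (names and the rule are placeholders; rule is an ordinary JSON Schema object):
+const db = new Database();
const collection = db.collection("some-collection");
await collection.setProperties({
  schema: {
    level: "strict",
    message: "Documents must have a string username",
    rule: {
      type: "object",
      properties: { username: { type: "string" } },
      required: ["username"]
    }
  }
});
+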
+Properties for validating documents in a collection.
+When validation should be applied.
+Message to be used if validation fails.
+JSON Schema description of the validation schema for documents.
+Type of document validation.
+Strategy for sharding a collection.
+Options for truncating collections.
+Optional compact?: boolean
Whether the collection should be compacted after truncation.
+Optional waitForSync
Whether data should be synchronized to disk before returning from this operation.
+When a validation should be applied.
+"none"
: No validation."new"
: Newly inserted documents are validated."moderate"
: New and modified documents are validated unless the modified
+document was already invalid."strict"
: New and modified documents are always validated.Write operation that can result in a computed value being computed.
+Credentials for HTTP Basic authentication.
+Optional password?: string
Password to use for authentication. Defaults to an empty string.
+Username to use for authentication, e.g. "root".
Credentials for HTTP Bearer token authentication.
+Bearer token to use for authentication.
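+A sketch of passing either kind of credentials when constructing a Database (URL, password and token are placeholders):
+const basicDb = new Database({
  url: "http://127.0.0.1:8529",
  auth: { username: "root", password: "hunter2" }
});
const bearerDb = new Database({
  url: "http://127.0.0.1:8529",
  auth: { token: "some-jwt-token" }
});
+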
+Options for configuring arangojs.
+Optional agentOptions
If set, arangojs will use the undici package to make requests and the provided options will be used to create the undici agent.
See the undici documentation for more information on the available options.
+Optional arangoVersion
Numeric representation of the ArangoDB version the driver should expect. The format is defined as XYYZZ where X is the major version, Y is the zero-filled two-digit minor version and Z is the zero-filled two-digit bugfix version, e.g. 30102 for 3.1.2, 20811 for 2.8.11.
Depending on this value certain methods may become unavailable or change their behavior to remain compatible with different versions of ArangoDB.
+Default: 31100
+Optional auth?: BasicAuthCredentials | BearerAuthCredentials
Credentials to use for authentication.
+See also databases.Database#useBasicAuth and databases.Database#useBearerAuth.
+Default: { username: "root", password: "" }
+Optional databaseName
Name of the database to use.
+Default: "_system"
+Optional fetchOptions
Default options to pass to the fetch function when making requests.
See the Fetch API specification or the MDN Web Docs for more information on the available options.
+Optional loadBalancingStrategy
Determines the behavior when multiple URLs are provided:
+"NONE": No load balancing. All requests will be handled by the first URL in the list until a network error is encountered. On network error, arangojs will advance to using the next URL in the list.
"ONE_RANDOM": Randomly picks one URL from the list initially, then behaves like "NONE".
"ROUND_ROBIN": Every sequential request uses the next URL in the list.
Default: "NONE"
+Optional onError
Callback that will be invoked when a request fails.
+Error encountered when handling this request.
+Optional poolSize
Maximum number of parallel requests arangojs will perform. If any additional requests are attempted, they will be enqueued until one of the active requests has completed.
+Note: when using ROUND_ROBIN load balancing and passing an array of URLs in the url option, the default value of this option will be set to 3 * url.length instead of 3.
Default: 3
+Optional precaptureStackTraces
If set to true, arangojs will generate stack traces every time a request is initiated and augment the stack traces of any errors it generates.
Warning: This will cause arangojs to generate stack traces in advance +even if the request does not result in an error. Generating stack traces +may negatively impact performance.
+Optional responseQueueTimeSamples
Limits the number of values of server-reported response queue times that will be stored and accessible using databases.Database#queueTime. If set to a finite value, older values will be discarded to make room for new values when that limit is reached.
+Default: 10
+Optional url?: string | string[]
Base URL of the ArangoDB server or list of server URLs.
+When working with a cluster, the method databases.Database#acquireHostList can be used to automatically pick up additional coordinators/followers at any point.
+When running ArangoDB on a unix socket, e.g. /tmp/arangodb.sock, the following URL formats are supported for unix sockets:
unix:///tmp/arangodb.sock (no SSL)
http+unix:///tmp/arangodb.sock (or https+unix:// for SSL)
http://unix:/tmp/arangodb.sock (or https://unix: for SSL)
Additionally ssl and tls are treated as synonymous with https and tcp is treated as synonymous with http, so the following URLs are considered identical:
tcp://127.0.0.1:8529 and http://127.0.0.1:8529
ssl://127.0.0.1:8529 and https://127.0.0.1:8529
tcp+unix:///tmp/arangodb.sock and http+unix:///tmp/arangodb.sock
ssl+unix:///tmp/arangodb.sock and https+unix:///tmp/arangodb.sock
tcp://unix:/tmp/arangodb.sock and http://unix:/tmp/arangodb.sock
ssl://unix:/tmp/arangodb.sock and https://unix:/tmp/arangodb.sock
See also auth for passing authentication credentials.
Default: "http://127.0.0.1:8529"
Determines the behavior when multiple URLs are used:
+"NONE"
: No load balancing. All requests will be handled by the first
+URL in the list until a network error is encountered. On network error,
+arangojs will advance to using the next URL in the list.
"ONE_RANDOM"
: Randomly picks one URL from the list initially, then
+behaves like "NONE"
.
"ROUND_ROBIN"
: Every sequential request uses the next URL in the list.
Extends the given base type T with the generic HTTP API response properties.
Interface representing an ArangoDB error response.
+Intended response status code as provided in the response body.
+Indicates that the request resulted in an error.
+Error message as provided in the response body.
+ArangoDB error code as provided in the response body.
+See the ArangoDB error documentation +for more information.
+Generic properties shared by all ArangoDB HTTP API responses.
Response status code, typically 200.
Indicates that the request was successful.
+Options available for requests made with the Fetch API.
+Optional
attributionReporting
(Browser only.) Controls the Attribution Reporting API specific behavior.
+See the MDN Web Docs +for more information on the available options.
+Optional
cache?: string
(Browser only.) Cache mode to use for the request.
+See the Fetch API specification +or the MDN Web Docs +for more information on the available options.
+Optional
credentials?: "omit" | "include" | "same-origin"
(Browser only.) Controls sending of credentials and cookies.
"omit": Never send cookies.
"include": Always send cookies.
"same-origin": Only send cookies if the request is to the same origin.
Default: "same-origin"
Optional
dispatcher?: any
(Node.js only.) Undici Dispatcher instance to use for the request.
Defaults to the global dispatcher.
+Optional
headers?: string[][] | Record<string, string | ReadonlyArray<string>> | Headers
Headers object containing any additional headers to send with the request.
Note that the Authorization header will be overridden if the auth configuration option is set.
Optional
keepalive?: boolean
Controls whether the socket should be reused for subsequent requests.
+Default: false
Optional
mode?: string
(Browser only.) Sets cross-origin behavior for the request.
+See the Fetch API specification +or the MDN Web Docs +for more information on the available options.
+Default: "cors"
Optional
priority?: "low" | "high" | "auto"
(Browser only.) Request priority relative to other requests of the same type.
+See the Fetch API specification +or the MDN Web Docs +for more information on the available options.
+Default: "auto"
Optional
redirect?: "error" | "follow" | "manual"
Controls what to do when the response status code is a redirect.
"error": Abort with a network error.
"follow": Automatically follow redirects.
"manual": Abort with an HttpError.
Default: "follow"
Optional
referrer?: string
Value to use for the Referer header.
If set to "about:client", the default value for the context in which the request is made will be used.
Default: "about:client"
Optional
referrerPolicy
(Browser only.) Policy to use for the Referer header, equivalent to the semantics of the Referrer-Policy header.
See the Fetch API specification +or the MDN Web Docs +for more information on the available options.
+Options that can be shared globally for all requests made with arangojs.
+Optional
afterResponse
Callback that will be invoked when the server response has been received and processed or when the request has failed without a response.
The originating request will be available as the request property on either the error or response object.
Error encountered when handling this request, or null.
Optional
res?: globalThis.Response & { request: globalThis.Request }
Response object for this request, if no error occurred.
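For illustration, a sketch of a global response hook set via this option (the logging logic is an example):
const db = new Database({
  url: "http://127.0.0.1:8529",
  afterResponse(err, res) {
    if (err) console.error("Request failed:", err.message);
    else console.log("Response status:", res?.status);
  },
});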
+Optional
allowDirtyRead
Whether ArangoDB is allowed to perform a dirty read to respond to the request. If set to true, the response may reflect a dirty state from a non-authoritative server.
Default: false
Optional
beforeRequest
Callback that will be invoked with the finished request object before it is finalized. In the browser the request may already have been sent.
+Request object or XHR instance used for this request.
+Optional
maxRetries
Determines the behavior when a request fails because the underlying connection to the server could not be opened (e.g. ECONNREFUSED in Node.js):
false: the request fails immediately.
0: the request is retried until a server can be reached but only a total number of times matching the number of known servers (including the initial failed request).
any other number: the request is retried until a server can be reached or the request has been retried a total of maxRetries number of times (not including the initial failed request).
When working with a single server, the retries (if any) will be made to +the same server.
+This setting currently has no effect when using arangojs in a browser.
+Note: Requests bound to a specific server (e.g. fetching query results) +will never be retried automatically and ignore this setting.
Note: To set the number of retries when a write-write conflict is encountered, see retryOnConflict instead.
Default: 0
Optional
retryOnConflict
If set to a positive number, requests will automatically be retried at most this many times if they result in a write-write conflict.
+Default: 0
Optional
timeout?: number
Time in milliseconds after which arangojs will abort the request if the socket has not already timed out.
+Fetch-specific options for performing a request with arangojs.
+Optional
integrity?: `${string}-${string}`
Subresource integrity value to use for the request, formatted as <hash-algorithm>-<hash-source>.
Options for performing a request with arangojs.
+Optional
body?: any
Request body data.
+Optional
expectBinary
If set to true, the response body will not be interpreted as JSON and instead passed as-is.
Optional
fetchOptions
Additional options to pass to the fetch function.
Optional
headers?: string[][] | Record<string, string | ReadonlyArray<string>> | Headers
Headers object containing any additional headers to send with the request.
Note that the Authorization header will be overridden if the auth configuration option is set.
Optional
isBinary
If set to true, the request body will not be converted to JSON and instead passed as-is.
Optional
method?: string
HTTP method to use in order to perform the request.
+Default: "GET"
Optional
pathname?: string
URL path, relative to the server domain.
+Optional
search?: URLSearchParams | Record<string, any>
URL parameters to pass as part of the query string.
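A minimal sketch of a raw request built from these options (the endpoint and parameters are examples; the exact return shape depends on the driver version):
const res = await db.request({
  method: "GET",
  pathname: "/_api/version",
  search: { details: true },
});
console.log(res);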
+Additional information about the cursor.
+Optional
plan?: Record<string, any>
Query execution plan for the executed query.
+Optional
profile?: Record<string, number>
Additional profiling information for the executed query.
+Optional
stats?: CursorStats
Additional statistics about the query execution.
+Warnings encountered while executing the query.
Additional statistics about the query execution of the cursor.
+Total number of index entries read from in-memory caches for indexes of +type edge or persistent.
+Total number of cache read attempts for index entries that could not be +served from in-memory caches for indexes of type edge or persistent.
+Total number of cursor objects created during query execution.
+Total number of times an existing cursor object was repurposed.
+Execution time of the query in seconds.
+Total number of documents that were removed after executing a filter condition in a FilterNode.
+Optional
fullCount
Total number of documents that matched the search condition if the query's final top-level LIMIT statement were not present.
+Total number of cluster-internal HTTP requests performed.
+Optional
nodes?: { ... }
Runtime statistics per query execution node if profile was set to 2 or greater.
Maximum memory usage of the query while it was running.
+Total number of documents iterated over when scanning a collection without an index.
+Total number of documents iterated over when scanning a collection using an index.
+Total number of data-modification operations successfully executed.
+Total number of data-modification operations that were unsuccessful, but have been ignored because of query option ignoreErrors.
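For example, a sketch that inspects these statistics after executing a query (the query is illustrative):
const cursor = await db.query(
  aql`FOR doc IN users FILTER doc.isActive LIMIT 10 RETURN doc`,
  { fullCount: true }
);
console.log(cursor.extra.stats?.executionTime);
console.log(cursor.extra.stats?.fullCount);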
+Options for creating a database.
+ +Optional
replicationFactor
(Cluster only.) Default replication factor for new collections in this database.
Setting this to 1 disables replication. Setting this to "satellite" will replicate to every DBServer.
Optional
sharding?: "" | "flexible" | "single"
(Cluster only.) The sharding method to use for new collections in the database.
+Optional
users?: CreateDatabaseUserOptions[]
Database users to create with the database.
+Optional
writeConcern
(Cluster only.) Default write concern for new collections created in this database.
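A minimal sketch of creating a database with these options (names and credentials are illustrative):
const db = new Database();
const myDb = await db.createDatabase("myDatabase", {
  users: [{ username: "admin", passwd: "hunter2" }],
});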
+Object describing a database.
+See Database#get.
+Unique identifier of the database.
+Whether the database is the system database.
+Name of the database.
+File system path of the database.
+Optional
replicationFactor
(Cluster only.) Default replication factor for new collections in this database.
+Optional
sharding?: "" | "flexible" | "single"
(Cluster only.) The sharding method to use for new collections in the database.
+Optional
writeConcern
(Cluster only.) Default write concern for new collections created in this database.
+Options for retrieving multiple documents from a collection.
+Optional
allowDirtyRead
If set to true, the request will explicitly permit ArangoDB to return a potentially dirty or stale result and arangojs will load balance the request without distinguishing between leaders and followers.
Optional
ignoreRevs
If set to false, the existing document will only be modified if its _rev property matches the same property on the new data.
Default: true
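For example, a sketch of fetching several documents at once with these options (collection and keys are illustrative):
const collection = db.collection("users");
const docs = await collection.documents(["abc123", "def456"], {
  allowDirtyRead: true,
});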
Type representing a document stored in a collection.
+Type representing an object that can be stored in a collection.
+Options for retrieving a document's edges from a collection.
+Optional
allowDirtyRead
If set to true, the request will explicitly permit ArangoDB to return a potentially dirty or stale result and arangojs will load balance the request without distinguishing between leaders and followers.
Result of retrieving edges in a collection.
+Options for checking whether a document exists in a collection.
+Optional
allowDirtyRead
If set to true, the request will explicitly permit ArangoDB to return a potentially dirty or stale result and arangojs will load balance the request without distinguishing between leaders and followers.
Optional
ifMatch
If set to a document revision, the document will only match if its _rev matches the given revision.
Optional
ifNoneMatch
If set to a document revision, the document will only match if its _rev does not match the given revision.
Common ArangoDB metadata properties of a document.
Unique ID of the document, which is composed of the collection name and the document _key.
Key of the document, which uniquely identifies the document within its +collection.
+Revision of the document data.
+Represents a bulk operation failure for an individual document.
+Indicates that the operation failed.
+Human-readable description of the failure.
+Numeric representation of the failure.
+Metadata returned by a document operation.
+Optional
_oldRev
Revision of the document that was updated or replaced by this operation.
A value that can be used to identify a document within a collection in arangojs methods, i.e. a partial ArangoDB document or the value of a document's _key or _id.
See DocumentMetadata.
+Type representing an edge document stored in an edge collection.
+Type representing an object that can be stored in an edge collection.
+ArangoDB metadata defining the relations of an edge document.
+Unique ID of the document that acts as the edge's start vertex.
+Unique ID of the document that acts as the edge's end vertex.
+Options for bulk importing documents into a collection.
+Optional
complete?: boolean
If set to true, the import will abort if any error occurs.
Optional
details?: boolean
Whether the response should contain additional details about documents that could not be imported.
+Optional
fromPrefix
(Edge collections only.) Prefix to prepend to _from attribute values.
Optional
onDuplicate
Controls behavior when a unique constraint is violated on the document key.
"error": the document will not be imported.
"update": the document will be merged into the existing document.
"replace": the document will replace the existing document.
"ignore": the document will not be imported and the unique constraint error will be ignored.
Default: "error"
Optional
overwrite?: boolean
If set to true, the collection is truncated before the data is imported.
Default: false
Optional
toPrefix
(Edge collections only.) Prefix to prepend to _to attribute values.
Optional
waitForSync
Whether to wait for the documents to have been synced to disk.
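As an illustrative sketch, a bulk import using these options (collection name and data are examples):
const collection = db.collection("users");
const result = await collection.import(
  [
    { _key: "jcd", name: "Jane" },
    { _key: "jreyes", name: "John" },
  ],
  { onDuplicate: "update", details: true }
);
console.log(result.created, result.updated, result.errors);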
+Result of a collection bulk import.
+Number of new documents imported.
+Optional
details?: string[]
Additional details about any errors encountered during the import.
+Number of empty documents.
+Whether the import failed.
+Number of documents that failed with an error.
+Number of documents that failed with an error that is ignored.
+Number of documents updated.
+Options for inserting a new document into a collection.
+Optional
keepNull
If set to false, properties with a value of null will be removed from the new document.
Default: true
Optional
mergeObjects
If set to false, object properties that already exist in the old document will be overwritten rather than merged when an existing document with the same _key or _id is updated. This does not affect arrays.
Default: true
Optional
overwriteMode
Defines what should happen if a document with the same _key or _id already exists, instead of throwing an exception.
Default: "conflict"
+Optional
refillIndexCaches
If set to true, new entries will be added to in-memory index caches if document insertions affect the edge index or cache-enabled persistent indexes.
Default: false
Optional
returnNew
If set to true, the complete new document will be returned as the new property on the result object. Has no effect if silent is set to true.
Default: false
Optional
returnOld
If set to true, the complete old document will be returned as the old property on the result object. Has no effect if silent is set to true.
This option is only available when overwriteMode is set to "update" or "replace".
Default: false
Optional
silent?: boolean
If set to true, no data will be returned by the server. This option can be used to reduce network traffic.
Default: false
Optional
versionAttribute
If set, the attribute with the name specified by the option is looked up in the stored document and the attribute value is compared numerically to the value of the versioning attribute in the supplied document that is supposed to update/replace it.
+Optional
waitForSync
If set to true, data will be synchronized to disk before returning.
Default: false
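For example, a sketch of an upsert-style insert using these options (collection and document are illustrative):
const collection = db.collection("users");
const result = await collection.save(
  { _key: "jcd", name: "Jane" },
  { overwriteMode: "update", returnNew: true }
);
console.log(result.new);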
An object with an ArangoDB document _id property.
See DocumentMetadata.
An object with an ArangoDB document _key property.
See DocumentMetadata.
+Type representing patch data for a given object type to represent a payload +ArangoDB can apply in a document PATCH request (i.e. a partial update).
This differs from Partial in that it also applies itself to any nested objects recursively.
Options for retrieving a document from a collection.
+Optional
allowDirtyRead
If set to true, the request will explicitly permit ArangoDB to return a potentially dirty or stale result and arangojs will load balance the request without distinguishing between leaders and followers.
Optional
graceful?: boolean
If set to true, null is returned instead of an exception being thrown if the document does not exist.
Optional
ifMatch
If set to a document revision, the request will fail with an error if the document exists but its _rev does not match the given revision.
Optional
ifNoneMatch
If set to a document revision, the request will fail with an error if the document exists and its _rev matches the given revision. Note that an HttpError with code 304 will be thrown instead of an ArangoError.
Options for removing a document from a collection.
+Optional
ifMatch
If set to a document revision, the document will only be removed if its _rev matches the given revision.
Optional
refillIndexCaches
If set to true, existing entries in in-memory index caches will be deleted if document removals affect the edge index or cache-enabled persistent indexes.
Default: false
Optional
returnOld
If set to true, the complete old document will be returned as the old property on the result object. Has no effect if silent is set to true.
Default: false
Optional
silent?: boolean
If set to true, no data will be returned by the server. This option can be used to reduce network traffic.
Default: false
Optional
waitForSync
If set to true, changes will be synchronized to disk before returning.
Default: false
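A minimal sketch of removing a document with these options (collection and key are illustrative):
const collection = db.collection("users");
const result = await collection.remove("jcd", { returnOld: true });
console.log(result.old);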
Options for replacing an existing document in a collection.
+Optional
ifMatch
If set to a document revision, the document will only be replaced if its _rev matches the given revision.
Optional
ignoreRevs
If set to false, the existing document will only be modified if its _rev property matches the same property on the new data.
Default: true
Optional
refillIndexCaches
If set to true, existing entries in in-memory index caches will be updated if document replacements affect the edge index or cache-enabled persistent indexes.
Default: false
Optional
returnNew
If set to true, the complete new document will be returned as the new property on the result object. Has no effect if silent is set to true.
Default: false
Optional
returnOld
If set to true, the complete old document will be returned as the old property on the result object. Has no effect if silent is set to true.
Default: false
Optional
silent?: boolean
If set to true, no data will be returned by the server. This option can be used to reduce network traffic.
Default: false
Optional
versionAttribute
If set, the attribute with the name specified by the option is looked up in the stored document and the attribute value is compared numerically to the value of the versioning attribute in the supplied document that is supposed to update/replace it.
+Optional
waitForSync
If set to true, data will be synchronized to disk before returning.
Default: false
Options for updating a document in a collection.
+Optional
ifMatch
If set to a document revision, the document will only be updated if its _rev matches the given revision.
Optional
ignoreRevs
If set to false, the existing document will only be modified if its _rev property matches the same property on the new data.
Default: true
Optional
keepNull
If set to false, properties with a value of null will be removed from the new document.
Default: true
Optional
mergeObjects
If set to false, object properties that already exist in the old document will be overwritten rather than merged. This does not affect arrays.
Default: true
Optional
refillIndexCaches
If set to true, existing entries in in-memory index caches will be updated if document updates affect the edge index or cache-enabled persistent indexes.
Default: false
Optional
returnNew
If set to true, the complete new document will be returned as the new property on the result object. Has no effect if silent is set to true.
Default: false
Optional
returnOld
If set to true, the complete old document will be returned as the old property on the result object. Has no effect if silent is set to true.
Default: false
Optional
silent?: boolean
If set to true, no data will be returned by the server. This option can be used to reduce network traffic.
Default: false
Optional
versionAttribute
If set, the attribute with the name specified by the option is looked up in the stored document and the attribute value is compared numerically to the value of the versioning attribute in the supplied document that is supposed to update/replace it.
+Optional
waitForSync
If set to true, data will be synchronized to disk before returning.
Default: false
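For example, a sketch of a partial update using these options (collection, key and patch data are illustrative):
const collection = db.collection("users");
const result = await collection.update(
  "jcd",
  { profile: { theme: "dark" }, nickname: null },
  { keepNull: false, mergeObjects: true, returnNew: true }
);
console.log(result.new);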
A configuration option.
+Optional
default?: any
The default value for this option in plain JSON. Can be omitted to provide no default value.
Optional
description?: string
A human-readable description of the option.
Optional
required?: boolean
Whether the service cannot function without this option. Defaults to true unless a default value is provided.
+The type of value expected for this option.
+A service dependency.
+Optional
description?: string
A description of how the API is used or why it is needed.
Optional
multiple?: boolean
Whether the dependency can be specified more than once.
Optional
name?: string
Name of the API the service expects.
Optional
required?: boolean
Whether the service cannot function without this dependency.
Optional
version?: string
The semantic version ranges of the API the service expects.
+A service file asset.
+Optional
gzip?: boolean
If set to true the file will be served with gzip-encoding if supported by the client. This can be useful when serving text files like client-side JavaScript, CSS or HTML.
+Relative path of the file or folder within the service.
+Optional
type?: string
The MIME content type of the file. Defaults to an intelligent guess based on the filename's extension.
+Schema for ArangoDB Foxx service manifests.
+Optional
author?: string
The full name of the author of the service (i.e. you). This will be shown in the web interface.
Optional
configuration?: Record<string, Configuration>
An object defining the configuration options this service requires.
Optional
contributors?: string[]
A list of names of people that have contributed to the development of the service in some way. This will be shown in the web interface.
Optional
defaultDocument
If specified, the / (root) route of the service will automatically redirect to the given relative path, e.g. "index.html".
Optional
dependencies?: Record<string, string | Dependency>
The dependencies this service uses, i.e. which APIs its dependencies need to be compatible with.
Optional
description?: string
A human-readable description of the service. This will be shown in the web interface.
+Optional
engines?: Record<string, string> & { arangodb?: string }
An object indicating the semantic version ranges of ArangoDB (or compatible environments) the service will be compatible with.
Optional
files?: Record<string, string | File>
An object defining file assets served by this service.
Optional
keywords?: string[]
A list of keywords that help categorize this service. This is used by the Foxx Store installers to organize services.
Optional
lib?: string
The relative path to the Foxx JavaScript files in the service, e.g. "lib". Defaults to the folder containing this manifest.
Optional
license?: string
A string identifying the license under which the service is published, ideally in the form of an SPDX license identifier. This will be shown in the web interface.
Optional
main?: string
The relative path to the main entry point of this service (relative to lib), e.g. "index.js".
Optional
name?: string
The name of the Foxx service. This will be shown in the web interface.
Optional
provides?: Record<string, string>
The dependencies this service provides, i.e. which APIs it claims to be compatible with.
Optional
scripts?: Record<string, string>
An object defining named scripts provided by this service, which can either be used directly or as queued jobs by other services.
Optional
tests?: string | string[]
A path/pattern or list of paths/patterns of JavaScript tests provided for this service.
Optional
thumbnail?: string
The filename of a thumbnail that will be used alongside the service in the web interface. This should be a JPEG or PNG image that looks good at sizes 50x50 and 160x160.
Optional
version?: string
The version number of the Foxx service. The version number must follow the semantic versioning format. This will be shown in the web interface.
+Options for adding an edge definition to a graph.
+Optional
satellites?: (string | ArangoCollection)[]
(Enterprise Edition cluster only.) Collections to be included in a Hybrid SmartGraph.
+Options for adding a vertex collection to a graph.
+Optional
satellites?: (string | ArangoCollection)[]
(Enterprise Edition cluster only.) Collections to be included in a Hybrid SmartGraph.
+Option for creating a graph.
+Optional
isDisjoint
(Enterprise Edition cluster only.) If set to true, the graph will be created as a Disjoint SmartGraph.
Optional
isSmart
(Enterprise Edition cluster only.) If set to true, the graph will be created as a SmartGraph.
Optional
numberOfShards
(Cluster only.) Number of shards that is used for every collection within this graph.
Has no effect when replicationFactor is set to "satellite".
Optional
orphanCollections
Additional vertex collections. Documents within these collections do not have edges within this graph.
+Optional
replicationFactor
(Cluster only.) Replication factor used when initially creating collections for this graph.
+Default: 1
Optional
satellites?: (string | ArangoCollection)[]
(Enterprise Edition cluster only.) Collections to be included in a Hybrid SmartGraph.
+Optional
smartGraphAttribute
(Enterprise Edition cluster only.) Attribute containing the shard key value to use for smart sharding.
+Optional
waitForSync
If set to true, the request will wait until all modifications have been synchronized to disk before returning successfully.
Default: false
Optional
writeConcern
(Cluster only.) Write concern for new collections in the graph.
Has no effect when replicationFactor is set to "satellite".
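As an illustrative sketch, creating a graph with some of these options (graph and collection names are examples):
const graph = db.graph("social");
await graph.create(
  [{ collection: "knows", from: ["persons"], to: ["persons"] }],
  { numberOfShards: 3, replicationFactor: 2 }
);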
Definition of a relation in a Graph.
+Name of the collection containing the edges.
+Array of names of collections containing the start vertices.
+Array of names of collections containing the end vertices.
+An edge definition used to define a collection of edges in a Graph.
+Collection containing the edges.
+Collection or collections containing the start vertices.
+Collection or collections containing the end vertices.
+General information about a graph.
+Definitions for the relations of the graph.
+Optional
isDisjoint
(Enterprise Edition cluster only.) If set to true, the graph has been created as a Disjoint SmartGraph.
Optional
isSatellite
(Enterprise Edition cluster only.) If set to true, the graph is a SatelliteGraph.
Optional
isSmart
(Enterprise Edition cluster only.) If set to true, the graph has been created as a SmartGraph.
Name of the graph.
+Optional
numberOfShards
(Cluster only.) Number of shards that is used for every collection within this graph.
+Additional vertex collections. Documents within these collections do not +have edges within this graph.
+Optional
replicationFactor
(Cluster only.) Replication factor used when initially creating collections for this graph.
+Optional
smartGraphAttribute
(Enterprise Edition cluster only.) Attribute containing the shard key value to use for smart sharding.
+Optional
writeConcern
(Cluster only.) Write concern for new collections in the graph.
+Options for inserting a document into a graph collection.
+Optional
returnNew
If set to true, the complete new document will be returned as the new property on the result object.
Default: false
Optional
waitForSync
If set to true, data will be synchronized to disk before returning.
Default: false
Options for retrieving a document from a graph collection.
+Optional
allowDirtyRead
If set to true, the request will explicitly permit ArangoDB to return a potentially dirty or stale result and arangojs will load balance the request without distinguishing between leaders and followers.
Default: false
Optional
graceful?: boolean
If set to true, null is returned instead of an exception being thrown if the document does not exist.
Default: false
Optional
rev?: string
If set to a document revision, the document will only be returned if its _rev property matches this value.
See also documents.DocumentMetadata.
+Options for removing a document from a graph collection.
+Optional
returnOld
If set to true, the complete old document will be returned as the old property on the result object.
Default: false
Optional
rev?: string
If set to a document revision, the document will only be removed if its _rev property matches this value.
See also documents.DocumentMetadata.
+Optional
waitForSync
If set to true, data will be synchronized to disk before returning.
Default: false
Options for replacing an edge definition in a graph.
+Optional
satellites?: string[]
(Enterprise Edition cluster only.) Collections to be included in a Hybrid SmartGraph.
+Options for replacing a document in a graph collection.
+Optional
keepNull
If set to false, properties with a value of null will be removed from the new document.
Default: true
Optional
returnNew
If set to true, the complete new document will be returned as the new property on the result object.
Default: false
Optional
returnOld
If set to true, the complete old document will be returned as the old property on the result object.
Default: false
Optional
rev?: string
If set to a document revision, the document will only be modified if its _rev property matches this value.
See also documents.DocumentMetadata.
+Optional
waitForSync
If set to true, data will be synchronized to disk before returning.
Default: false
(Enterprise Edition only.) List of known hot backups.
+(Enterprise Edition only.) Options for creating a hot backup.
+Optional
allowInconsistent
If set to true and no global transaction lock can be acquired within the given timeout, a possibly inconsistent backup is taken.
Default: false
Optional
force?: boolean
(Enterprise Edition cluster only.) If set to true and no global transaction lock can be acquired within the given timeout, all running transactions are forcefully aborted to ensure that a consistent backup can be created.
Default: false
Optional
label?: string
Label to be appended to the backup's identifier.
+Default: If omitted or empty, a UUID will be generated.
+Optional
timeout?: number
Time in seconds that the operation will attempt to get a consistent snapshot.
Default: 120
(Enterprise Edition only.) Result of a hot backup.
+An object representing an arangosearch index.
+Optional
figures?: Record<string, any>
Options for creating a geo index.
+Optional
geoJson
If set to true, fields must be an array containing a single attribute path and the attribute value must be an array with two values, the first of which will be interpreted as the longitude and the second of which will be interpreted as the latitude of the document.
If set to false, fields can be either an array containing two attribute paths, the first of which will be interpreted as the latitude and the second as the longitude, or a single attribute path for an array containing two values, the first of which will be interpreted as the latitude, the second as the longitude.
Default: false
Optional
legacyPolygons
If set to true, the index will use pre-3.10 rules for parsing GeoJSON polygons. This option is always implicitly true when using ArangoDB 3.9 or lower.
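For example, a sketch of creating a geo index over GeoJSON points (collection and field names are illustrative):
const collection = db.collection("places");
await collection.ensureIndex({
  type: "geo",
  fields: ["location"],
  geoJson: true,
});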
Options for creating an index.
+Options for creating an inverted index.
+Optional
analyzer?: string
Name of the default Analyzer to apply to the values of indexed fields.
+Default: "identity"
Optional
cache?: boolean
(Enterprise Edition only.) If set to true, then field normalization values will always be cached in memory.
Default: false
Optional
cleanupIntervalStep
Wait at least this many commits between removing unused files in the ArangoSearch data directory.
+Default: 2
Optional
commitIntervalMsec
Wait at least this many milliseconds between committing View data store changes and making documents visible to queries.
+Default: 1000
Optional
consolidationIntervalMsec
Wait at least this many milliseconds between applying consolidationPolicy to consolidate View data store and possibly release space on the filesystem.
Default: 1000
Optional
consolidationPolicy
The consolidation policy to apply for selecting which segments should be merged.
+Default: { type: "tier" }
Optional
features?: AnalyzerFeature[]
List of Analyzer features to enable for the default Analyzer.
+Defaults to the Analyzer's features.
+Optional
includeAllFields
If set to true, all document attributes are indexed, excluding any sub-attributes configured in the fields array. The analyzer and features properties apply to the sub-attributes. This option only applies when using the index in a SearchAlias View.
Default: false
Optional
optimizeTopK
An array of strings defining sort expressions to optimize.
+Optional
parallelism?: number
The number of threads to use for indexing the fields.
+Default: 2
Optional
primaryKeyCache
(Enterprise Edition only.) If set to true, then the primary key column will always be cached in memory.
Default: false
Optional
primarySort
Primary sort order to optimize AQL queries using a matching sort order.
+Optional
searchField
If set to true, array values will by default be indexed using the same behavior as ArangoSearch Views. This option only applies when using the index in a SearchAlias View.
Default: false
Optional
storedValues
An array of attribute paths that will be stored in the index; they cannot be used for index lookups or sorting, but can avoid full document lookups.
+Optional
trackListPositions
If set to true, the position of values in array values is tracked and needs to be specified in queries. Otherwise all values in an array are treated as equivalent. This option only applies when using the index in a SearchAlias View.
Default: false
Optional
writebufferActive
Maximum number of concurrent active writers (segments) that perform a transaction.
Default: 0 (disabled)
Optional
writebufferIdle
Maximum number of writers (segments) cached in the pool.
+Default: 64
Optional
writebufferSizeMax
Maximum memory byte size per writer (segment) before a writer (segment) flush is triggered.
Default: 33554432 (32 MiB)
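As an illustrative sketch, creating an inverted index with a few of these options (collection, field and Analyzer names are examples):
const collection = db.collection("articles");
await collection.ensureIndex({
  type: "inverted",
  fields: ["title", { name: "body", analyzer: "text_en" }],
  primarySort: { fields: [{ field: "createdAt", direction: "desc" }] },
});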
Options for creating an MDI index.
+Data type of the dimension attributes.
+Optional
unique?: boolean
If set to true, a unique index will be created.
Default: false
Options for creating a persistent index.
+Optional
cacheEnabled
If set to true, an in-memory hash cache will be put in front of the persistent index.
Default: false
Optional
deduplicate?: boolean
If set to false, inserting duplicate index values from the same document will lead to a unique constraint error if this is a unique index.
Default: true
Optional
estimates?: boolean
If set to false, index selectivity estimates will be disabled for this index.
Default: true
Optional
sparse?: boolean
If set to true, the index will omit documents that do not contain at least one of the attribute paths in fields and these documents will be ignored for uniqueness checks.
Default: false
Optional
storedValues
An array of attribute paths that will be stored in the index; they cannot be used for index lookups or sorting, but can avoid full document lookups.
+Optional
unique?: boolean
If set to true, a unique index will be created.
Default: false
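For example, a sketch of a unique, sparse persistent index (collection and field are illustrative):
const collection = db.collection("users");
await collection.ensureIndex({
  type: "persistent",
  fields: ["email"],
  unique: true,
  sparse: true,
});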
Options for creating a TTL index.
+Duration in seconds after the attribute value at which the document will +be considered as expired.
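A minimal sketch of a TTL index using this option (collection and field are illustrative):
const collection = db.collection("sessions");
await collection.ensureIndex({
  type: "ttl",
  fields: ["createdAt"],
  expireAfter: 3600, // expire documents one hour after createdAt
});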
+An object representing a geo index.
+An object representing a potentially hidden index.
This type can be used to cast the result of collection.indexes to better reflect the actual data returned by the server when using the withHidden option:
const indexes = await collection.indexes<HiddenIndex>({
  withHidden: true
});
// indexes may include internal indexes and indexes with a "progress"
// property
+
+Optional
progress?: number
Progress of this index if it is still being created.
+An object representing an index.
+Shared attributes of all index types.
+An array of attribute paths.
+Optional
figures?: Record<string, any>
Additional stats about this index.
+A unique identifier for this index.
+A unique name for this index.
+Whether documents not containing at least one of the attribute paths +are omitted by this index.
+Type of this index.
+Whether this index enforces uniqueness for values of its attribute paths.
Index name, id or object with a name or id property.
An object representing an internal index.
+Type of an internal index.
+An object representing an inverted index.
+Optional
cache?: boolean
Optional
primaryKeyCache?: boolean
Optional
cache?: boolean
An object representing a field in an inverted index.
+Optional
analyzer?: string
Optional
cache?: boolean
Optional
features?: AnalyzerFeature[]
Optional
includeAllFields?: boolean
Optional
nested?: InvertedIndexNestedField[]
Optional
searchField?: boolean
Optional
trackListPositions?: boolean
Options for an attribute path in an inverted index.
+Optional
analyzer?: string
Name of the Analyzer to apply to the values of this field.
Defaults to the analyzer specified on the index itself.
Optional
cache?: boolean
(Enterprise Edition only.) If set to true, then field normalization values will always be cached in memory.
Defaults to the value of cache specified on the index itself.
Optional
features?: AnalyzerFeature[]
List of Analyzer features to enable for this field's Analyzer.
+Defaults to the features of the Analyzer.
+Optional
includeAllFields
If set to true, all document attributes are indexed, excluding any sub-attributes configured in the fields array. The analyzer and features properties apply to the sub-attributes. This option only applies when using the index in a SearchAlias View.
Defaults to the value of includeAllFields specified on the index itself.
An attribute path.
+Optional
nested?: (string | InvertedIndexNestedFieldOptions)[]
(Enterprise Edition only.) Sub-objects to index to allow querying for co-occurring values.
+Optional
searchField
If set to true, array values will be indexed using the same behavior as ArangoSearch Views. This option only applies when using the index in a SearchAlias View.
Defaults to the value of searchField specified on the index itself.
Optional
trackListPositions
If set to true, the position of values in array values is tracked and needs to be specified in queries. Otherwise all values in an array are treated as equivalent. This option only applies when using the index in a SearchAlias View.
Defaults to the value of trackListPositions specified on the index itself.
(Enterprise Edition only.) An object representing a nested field in an +inverted index.
+Optional
analyzer?: string
Optional
features?: AnalyzerFeature[]
Optional
nested?: InvertedIndexNestedField[]
Optional
searchField?: boolean
(Enterprise Edition only.) Options for a nested field in an inverted index.
+Optional
analyzer?: string
Name of the Analyzer to apply to the values of this field.
Defaults to the analyzer specified on the parent options or on the index itself.
Optional
features?: AnalyzerFeature[]
List of Analyzer features to enable for this field's Analyzer.
+Defaults to the features of the Analyzer.
+An attribute path.
+Optional
nested?: (string | InvertedIndexNestedFieldOptions)[]
Sub-objects to index to allow querying for co-occurring values.
+Optional
searchField
If set to true, array values will be indexed using the same behavior as ArangoSearch Views. This option only applies when using the index in a SearchAlias View.
Defaults to the value of searchField specified on the index itself.
Options for defining a primary sort field on an inverted index.
+Optional
cache?: boolean
(Enterprise Edition only.) If set to true, then primary sort columns will always be cached in memory.
Default: false
Optional
compression?: Compression
How the primary sort data should be compressed.
+Default: "lz4"
An array of fields to sort the index by.
+Options for defining a stored value on an inverted index.
+Optional
cache?: boolean
(Enterprise Edition only.) If set to true, then stored values will always be cached in memory.
Default: false
Optional
compression?: Compression
How the attribute values should be compressed.
+Default: "lz4"
The attribute paths to store.
+Options for listing indexes.
+Optional
withHidden
If set to true, includes internal indexes as well as indexes that are not yet fully built but are in the building phase.
You should cast the resulting indexes to HiddenIndex to ensure internal and incomplete indexes are accurately represented.
Default: false
Optional
withStats
If set to true, includes additional information about each index.
Default: false
An object representing an MDI index.
An object with an id property.
An object with a name property.
An object representing a persistent index.
+Optional
storedValues
An object representing a primary index.
+An object representing a system index.
+An object representing a TTL index.
+An object representing a list of log entries.
+Options for retrieving log entries.
+Optional
level?: LogLevel | LogLevelLabel | Lowercase<LogLevelLabel>
If set, only log entries with this log level will be returned.
+Optional
offset?: number
If set, this many log entries will be skipped.
+Optional
search?: string
If set, only log entries containing the specified text will be returned.
+Optional
size?: number
If set, only this many entries will be returned.
+Optional
sort?: LogSortDirection
If set to "desc", log entries will be returned in reverse chronological order.
Default: "asc"
Optional
start?: number
If set, only log entries with an lid greater than or equal to this value will be returned.
Optional
upto?: LogLevel | LogLevelLabel | Lowercase<LogLevelLabel>
Maximum log level of the entries to retrieve.
Default: INFO
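For illustration, a sketch of retrieving log entries with these options, assuming the driver exposes them via Database#listLogEntries:
const entries = await db.listLogEntries({
  level: "WARNING",
  size: 10,
  sort: "desc",
});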
String representation of the logging level of a log entry.
+An object representing a single log entry.
+Log sorting direction, ascending or descending.
+Options for explaining a query.
+See Database#explain.
+Optional
allPlans
If set to true, all possible execution plans will be returned as the plans property. Otherwise only the optimal execution plan will be returned as the plan property.
Default: false
Optional
maxNumberOfPlans
Maximum number of plans that the optimizer is allowed to generate. Setting this to a low value limits the amount of work the optimizer does.
+Optional
optimizer?: { rules: string[] }
An object with a rules property specifying a list of optimizer rules to be included or excluded by the optimizer for this query. Prefix a rule name with + to include it, or - to exclude it. The name all acts as an alias matching all optimizer rules.
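A minimal sketch of explaining a query with these options (the query is illustrative):
const explanation = await db.explain(
  aql`FOR u IN users FILTER u.isActive RETURN u`,
  { allPlans: true, maxNumberOfPlans: 4 }
);
// with allPlans set to true the result contains a plans array
console.log(explanation.plans.length);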
Plan explaining query execution.
+Information about collections involved in the query.
+Total estimated cost of the plan.
+Estimated number of items returned by the query.
+Whether the query is a data modification query.
+Execution nodes in this plan.
+Rules applied by the optimizer.
+Variables used in the query.
+Optimizer statistics for an explained query.
+Time in seconds needed to explain the query.
+Maximum memory usage in bytes of the query during explain.
+Total number of plans created.
+Total number of rules executed for this query.
+Number of rules skipped for this query.
+Result of explaining a query with multiple plans.
+Whether it would be possible to cache the query.
+Query plans.
+Optimizer statistics for the explained query.
+Warnings encountered while planning the query execution.
+Result of parsing a query.
+Abstract syntax tree (AST) of the query.
+Names of all bind parameters used in the query.
+Names of all collections involved in the query.
+Whether the query was parsed.
+Object describing a query.
+Bind parameters used in the query.
+Name of the database the query runs in.
+Unique identifier for this query.
+Maximum memory usage in bytes of the query.
+Query string (potentially truncated).
+Query's running time in seconds.
+Date and time the query was started.
+Query's current execution state.
+Whether the query uses a streaming cursor.
+Name of the user that started the query.
+Optimizer rule for AQL queries.
+Options for executing a query.
+ +Optional
allowDirtyRead
If set to true, the query will be executed with support for dirty reads enabled, permitting ArangoDB to return a potentially dirty or stale result and arangojs will load balance the request without distinguishing between leaders and followers.
Note that dirty reads are only supported for read-only queries, not data modification queries (e.g. using INSERT, UPDATE, REPLACE or REMOVE) and only when using ArangoDB 3.4 or later.
Default: false
Optional
allowRetry
If set to true, cursor results will be stored by ArangoDB in such a way that batch reads can be retried in the case of a communication error.
Default: false
Optional
batchSize
Number of result values to be transferred by the server in each network roundtrip (or "batch").
+Must be greater than zero.
+Optional
cache?: boolean
If set to false, the AQL query results cache lookup will be skipped for this query.
Default: true
Optional
count?: boolean
Unless set to false, the number of result values in the result set will be returned in the count attribute. This may be disabled by default in a future version of ArangoDB if calculating this value has a performance impact for some queries.
Default: true
Optional
failOnWarning
If set to true, the query will throw an exception and abort if it would otherwise produce a warning.
Optional
fillBlockCache
If set to false, the query data will not be stored in the RocksDB block cache. This can be used to avoid thrashing the block cache when reading a lot of data.
Optional
fullCount
If set to true and the query has a LIMIT clause, the total number of values matched before the last top-level LIMIT in the query was applied will be returned in the extra.stats.fullCount attribute.
Optional
intermediateCommitCount
Maximum number of operations after which an intermediate commit is automatically performed.
+Optional
intermediateCommitSize
Maximum total size of operations in bytes after which an intermediate commit is automatically performed.
+Optional
maxNodesPerCallstack
Controls after how many execution nodes in a query a stack split should be performed.
Default: 250 (200 on macOS)
Optional
maxNumberOfPlans
Limits the maximum number of plans that will be created by the AQL query optimizer.
+Optional
maxRuntime
Maximum allowed execution time in seconds before the query will be killed.
If set to 0, the query will be allowed to run indefinitely.
Default: 0
Optional
maxTransactionSize
Maximum size of transactions in bytes.
+Optional
maxWarningCount
Limits the maximum number of warnings a query will return.
+Optional
memoryLimit
Maximum memory size in bytes that the query is allowed to use. Exceeding this value will result in the query failing with an error.
If set to 0, the memory limit is disabled.
Default: 0
Optional
optimizer?: { rules: string[] }
An object with a rules property specifying a list of optimizer rules to be included or excluded by the optimizer for this query. Prefix a rule name with + to include it, or - to exclude it. The name all acts as an alias matching all optimizer rules.
Optional
profile?: boolean | number
If set to 1 or true, additional query profiling information will be returned in the extra.profile attribute if the query is not served from the result cache.
If set to 2, the query will return execution stats per query plan node in the extra.stats.nodes attribute. Additionally the query plan is returned in extra.plan.
Optional
retryOnConflict
If set to a positive number, the query will automatically be retried at most this many times if it results in a write-write conflict.
+Default: 0
Optional
satelliteSyncWait
(Enterprise Edition cluster only.) Limits the maximum time in seconds a DBServer will wait to bring satellite collections involved in the query into sync. Exceeding this value will result in the query being stopped.
+Default: 60
Optional
skipInaccessibleCollections
(Enterprise Edition cluster only.) If set to true, collections inaccessible to the current user will result in an access error instead of being treated as empty.
Optional
stream?: boolean
If set to true, the query will be executed as a streaming query.
Optional
timeout?: number
Maximum time in milliseconds arangojs will wait for a server response. Exceeding this value will result in the request being cancelled.
Note: Setting a timeout for the client does not guarantee the query will be killed by ArangoDB if it is already being executed. See the maxRuntime option for limiting the execution time within ArangoDB.
Optional
ttl?: number
Time-to-live for the cursor in seconds. The cursor results may be garbage collected by ArangoDB after this much time has passed.
+Default: 30
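For example, a sketch of a query combining several of these options (the query is illustrative):
const cursor = await db.query(
  aql`FOR doc IN events SORT doc.date DESC LIMIT 100 RETURN doc`,
  { batchSize: 25, ttl: 60, fullCount: true, maxRuntime: 10 }
);
for await (const doc of cursor) {
  // process each document as it is fetched
}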
Information about query tracking.
+Whether query tracking is enabled.
+Maximum query string length in bytes that is kept in the list.
+Maximum number of slow queries that is kept in the list.
+Threshold execution time in seconds for when a query is +considered slow.
+Whether bind parameters are being tracked along with queries.
+Whether slow queries are being tracked.
+Options for query tracking.
+See Database#queryTracking.
+Optional
enabled?: boolean
If set to false, neither queries nor slow queries will be tracked.
Optional
maxQueryStringLength
Maximum query string length in bytes that will be kept in the list.
+Optional
maxSlowQueries
Maximum number of slow queries to be kept in the list.
+Optional
slowQueryThreshold
Threshold execution time in seconds for when a query will be considered slow.
+Optional
trackBindVars
If set to true, bind parameters will be tracked along with queries.
Optional
trackSlowQueries
If set to true and enabled is also set to true, slow queries will be tracked if their execution time exceeds slowQueryThreshold.
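A minimal sketch of updating query tracking with these options (the thresholds are examples):
await db.queryTracking({
  enabled: true,
  trackSlowQueries: true,
  slowQueryThreshold: 5,
});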
Result of explaining a query with a single plan.
+Whether it would be possible to cache the query.
+Query plan.
+Optimizer statistics for the explained query.
+Warnings encountered while planning the query execution.
+Definition of an AQL User Function.
+Implementation of the AQL User Function.
+Whether the function is deterministic.
+See Database#createFunction.
+Name of the AQL User Function.
+Options for installing the service.
+See Database#installService.
+Optional
configuration?: Record<string, any>
An object mapping configuration option names to values.
See also Database#getServiceConfiguration.
Optional
dependencies?: Record<string, string>
An object mapping dependency aliases to mount points.
See also Database#getServiceDependencies.
Optional
development?: boolean
Whether the service should be installed in development mode.
See also Database#setServiceDevelopmentMode.
Default: false
Optional
legacy?: boolean
Whether the service should be installed in legacy compatibility mode.
This overrides the engines option in the service manifest (if any).
Default: false
Optional
setup?: boolean
Whether the "setup" script should be executed.
+Default: true
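For illustration, a sketch of installing a service with these options (mount point and configuration are examples; the service bundle is a zip archive):
const source = new Blob([/* zip archive bytes */]);
await db.installService("/my-service", source, {
  configuration: { greeting: "hello" },
  development: true,
});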
Object describing a multi-service dependency defined by a Foxx service.
+Optional
current?: string[]
Current mount points the dependency is resolved to.
Optional
description?: string
Human-readable description of the dependency.
+Whether this is a multi-service dependency.
+Name of the service the dependency expects to match.
+Whether the dependency must be matched in order for the service +to be operational.
+Formatted name of the dependency.
+Version of the service the dependency expects to match.
+Options for replacing a service.
+See Database#replaceService.
+Optional
configuration?: Record<string, any>
An object mapping configuration option names to values.
See also Database#getServiceConfiguration.
Optional
dependencies?: Record<string, string>
An object mapping dependency aliases to mount points.
See also Database#getServiceDependencies.
Optional
development?: boolean
Whether the service should be installed in development mode.
See also Database#setServiceDevelopmentMode.
Default: false
Optional
force?: boolean
If set to true, replacing a service that does not already exist will fall back to installing the new service.
Default: false
Optional
legacy?: boolean
Whether the service should be installed in legacy compatibility mode.
This overrides the engines option in the service manifest (if any).
Default: false
Optional
setup?: boolean
Whether the "setup" script should be executed.
Default: true
Optional
teardown?: boolean
Whether the existing service's "teardown" script should be executed prior to removing that service.
+Default: true
Object describing a configuration option of a Foxx service.
+Processed current value of the configuration option as exposed in the +service code.
+Current value of the configuration option as stored internally.
+Optional
default?: any
Default value of the configuration option.
Optional
description?: string
Human-readable description of the configuration option.
+Whether the configuration option must be set in order for the service +to be operational.
+Formatted name of the configuration option.
+Data type of the configuration value.
+Note: "int"
and "bool"
are historical synonyms for "integer"
and
+"boolean"
. The "password"
type is synonymous with "string"
but can
+be used to distinguish values which should not be displayed in plain text
+by software when managing the service.
Object describing a Foxx service in detail.
+Internal checksum of the service's initial source bundle.
+Whether development mode is enabled for this service.
+Whether the service is running in legacy compatibility mode.
+Content of the service manifest of this service.
+Service mount point, relative to the database.
+Optional
name?: string
Name defined in the service manifest.
+Options for this service.
+Configuration values set for this service.
+Service dependency configuration of this service.
+File system path of the service.
+Optional
version?: string
Version defined in the service manifest.
+Object briefly describing a Foxx service.
+Whether development mode is enabled for this service.
+Whether the service is running in legacy compatibility mode.
+Service mount point, relative to the database.
+Optional
name?: string
Name defined in the service manifest.
+Service dependencies the service expects to be able to match as a mapping +from dependency names to versions the service is compatible with.
+Optional
version?: string
Version defined in the service manifest.
+Test results for a Foxx service's tests using the default reporter.
+Test results for a single test case using the default reporter.
+Optional
err?: string
Test stats for a Foxx service's tests.
+Total test duration in milliseconds.
+Number of tests that failed.
+Number of tests that ran successfully.
+Number of tests skipped or not executed.
+Total number of tests found.
+Test results for a Foxx service's tests using the stream reporter.
+Test results for a single test case using the stream reporter.
+Optional
err?: string
Test results for a single test suite using the suite reporter.
+Test results for a Foxx service's tests using the suite reporter.
+Test results for a single test case using the suite reporter.
+Optional
err?: any
Test results for a Foxx service's tests in TAP format.
+Test results for a Foxx service's tests in XUnit format using the JSONML +representation.
+Test results for a single test case in XUnit format using the JSONML +representation.
+Object describing a single-service dependency defined by a Foxx service.
+Optional
current?: string
Current mount point the dependency is resolved to.
Optional
description?: string
Human-readable description of the dependency.
+Whether this is a multi-service dependency.
+Name of the service the dependency expects to match.
+Whether the dependency must be matched in order for the service +to be operational.
+Formatted name of the dependency.
+Version of the service the dependency expects to match.
+OpenAPI 2.0 description of a Foxx service.
+Options for uninstalling a service.
+See Database#uninstallService.
+Optional
force?: boolean
If set to true, uninstalling a service that does not already exist will be considered successful.
Default: false
Optional
teardown?: boolean
Whether the service's "teardown" script should be executed prior to removing that service.
+Default: true
Options for upgrading a service.
+See Database#upgradeService.
+Optional
configuration?: Record<string, any>
An object mapping configuration option names to values.
See also Database#getServiceConfiguration.
Optional
dependencies?: Record<string, string>
An object mapping dependency aliases to mount points.
See also Database#getServiceDependencies.
Optional
development?: boolean
Whether the service should be installed in development mode.
See also Database#setServiceDevelopmentMode.
Default: false
Optional
force?: boolean
If set to true, upgrading a service that does not already exist will fall back to installing the new service.
Default: false
Optional
legacy?: boolean
Whether the service should be installed in legacy compatibility mode.
This overrides the engines option in the service manifest (if any).
Default: false
Optional
setup?: boolean
Whether the "setup" script should be executed.
Default: true
Optional
teardown?: boolean
Whether the existing service's "teardown" script should be executed prior to upgrading that service.
+Default: false
Options for how the transaction should be aborted.
+Optional
allowDirtyRead
If set to true, the request will explicitly permit ArangoDB to return a potentially dirty or stale result and arangojs will load balance the request without distinguishing between leaders and followers.
Collections involved in a transaction.
+Optional
exclusive?: (string | ArangoCollection)[] | string | ArangoCollection
An array of collections or a single collection that will be read from or written to during the transaction with no other writes being able to run in parallel.
+Optional
read?: (string | ArangoCollection)[] | string | ArangoCollectionAn array of collections or a single collection that will be read from +during the transaction.
+Optional
write?: (string | ArangoCollection)[] | string | ArangoCollectionAn array of collections or a single collection that will be read from or +written to during the transaction.
Options for how the transaction should be committed.
Optional allowDirtyRead?: boolean
If set to true, the request will explicitly permit ArangoDB to return a potentially dirty or stale result and arangojs will load balance the request without distinguishing between leaders and followers.
Details for a transaction.
See also TransactionInfo.
Unique identifier of the transaction.
Status (or "state") of the transaction.
Status of a given transaction.
See also TransactionDetails.
Unique identifier of the transaction.
Status of the transaction.
Options for how the transaction should be performed.
Optional allowDirtyRead?: boolean
If set to true, the request will explicitly permit ArangoDB to return a potentially dirty or stale result and arangojs will load balance the request without distinguishing between leaders and followers.
Optional allowImplicit?: boolean
Whether the transaction may read from collections not specified for this transaction. If set to false, accessing any collections not specified will result in the transaction being aborted to avoid potential deadlocks.
Default: true
Optional lockTimeout?: number
Determines how long the database will wait while attempting to gain locks on collections used by the transaction before timing out.
Optional maxTransactionSize?: number
Determines the transaction size limit in bytes.
Optional skipFastLockRound?: boolean
If set to true, the fast lock round will be skipped, which makes each locking operation take longer but guarantees deterministic locking order and may avoid deadlocks when many concurrent transactions are queued and try to access the same collection with an exclusive lock.
Optional waitForSync?: boolean
Determines whether to force the transaction to write all data to disk before returning.
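A sketch combining several of these options (the collection name and values are hypothetical):

const db = new Database();
const trx = await db.beginTransaction(["users"], {
  allowImplicit: false,
  lockTimeout: 5, // seconds to wait for collection locks
  maxTransactionSize: 128 * 1024, // transaction size limit in bytes
  waitForSync: true, // write all data to disk before returning
});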
Access level for an ArangoDB user's access to a collection or database.
Properties of an ArangoDB user object.
Whether the ArangoDB user account is enabled and can authenticate.
Additional information to store about this user.
ArangoDB username of the user.
Database user to create with a database.
Optional active?: boolean
Whether the user is active.
Default: true
Optional extra?: Record<string, any>
Additional data to store with the user object.
Optional passwd?: string
Password of the user to create.
Default: ""
Username of the user to create.
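For example, these properties can be passed when creating a database (the database name, username and password are hypothetical):

const db = new Database();
await db.createDatabase("myDb", {
  users: [{ username: "jsmith", passwd: "hunter2", active: true }],
});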
Options for accessing or manipulating access levels.
Optional collection?: ArangoCollection | string
The collection to access or manipulate the access level of.
Optional database?: Database | string
The database to access or manipulate the access level of.
If collection is an ArangoCollection, this option defaults to the database the collection is contained in. Otherwise this option defaults to the current database.
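A sketch of manipulating an access level with these options (the username and collection name are hypothetical):

const db = new Database();
await db.setUserAccessLevel("jsmith", {
  collection: "users",
  grant: "ro", // read-only access
});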
Options for modifying an ArangoDB user.
Optional active?: boolean
Whether the ArangoDB user account is enabled and can authenticate.
Default: true
Optional extra?: Record<string, any>
Additional information to store about this user.
Default: {}
Password the ArangoDB user will use for authentication.
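For example (the username and password are hypothetical):

const db = new Database();
await db.updateUser("jsmith", { passwd: "new-password", active: false });
// "jsmith" can no longer authenticate until reactivated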
A link definition for an ArangoSearch View.
Optional nested?: Record<string, ArangoSearchViewLink>
A link definition for an ArangoSearch View.
Optional analyzers?: string[]
A list of names of Analyzers to apply to values of processed document attributes.
Default: ["identity"]
Optional cache?: boolean
(Enterprise Edition only.) If set to true, then field normalization values will always be cached in memory.
Default: false
Optional fields?: Record<string, ArangoSearchViewLinkOptions>
An object mapping names of attributes to process for each document to ArangoSearchViewLinkOptions definitions.
Optional inBackground?: boolean
If set to true, then no exclusive lock is used on the source collection during View index creation, so that it remains basically available.
Default: false
Optional includeAllFields?: boolean
If set to true, all document attributes will be processed, otherwise only the attributes in fields will be processed.
Default: false
Optional nested?: Record<string, ArangoSearchViewLinkOptions>
(Enterprise Edition only.) An object mapping attribute names to ArangoSearchViewLinkOptions definitions to index sub-objects stored in an array.
Optional storeValues?: "none" | "id"
Controls how the view should keep track of the attribute values.
Default: "none"
Optional trackListPositions?: boolean
If set to true, the position of values in array values will be tracked, otherwise all values in an array will be treated as equal alternatives.
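A minimal sketch of these link options when creating an ArangoSearch View; the View, collection and field names are hypothetical ("text_en" is one of ArangoDB's built-in Analyzers):

const db = new Database();
await db.createView("articlesView", {
  type: "arangosearch",
  links: {
    articles: {
      includeAllFields: false,
      fields: {
        title: { analyzers: ["text_en"] },
      },
    },
  },
});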
Properties of an ArangoSearch View.
Options for modifying the properties of an ArangoSearch View.
Optional cleanupIntervalStep?: number
How many commits to wait between removing unused files.
Default: 2
Optional commitIntervalMsec?: number
How long to wait between committing View data store changes and making documents visible to queries.
Default: 1000
Optional consolidationIntervalMsec?: number
How long to wait between applying the consolidationPolicy.
Default: 10000
Optional consolidationPolicy?: TierConsolidationPolicy | BytesAccumConsolidationPolicy
Consolidation policy to apply for selecting which segments should be merged.
Default: { type: "tier" }
Optional links?: Record<string, Omit<ArangoSearchViewLinkOptions, "nested">>
An object mapping names of linked collections to ArangoSearchViewLinkOptions definitions.
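For example (the View name and values are hypothetical):

const db = new Database();
const view = db.view("articlesView");
await view.updateProperties({
  commitIntervalMsec: 500,
  consolidationIntervalMsec: 10000,
});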
Policy to consolidate based on segment byte size and live document count as dictated by the customization attributes.
Optional threshold?: number
Must be in the range of 0.0 to 1.0.
Type of consolidation policy.
The bytes_accum consolidation policy was deprecated in ArangoDB 3.7 and should be replaced with the tier consolidation policy.
Compression for storing data.
Options for creating an ArangoSearch View.
Options for creating a primary sort in an ArangoSearch View.
If set to "asc", the primary sorting order will be ascending.
If set to "desc", the primary sorting order will be descending.
Attribute path for the value of each document to use for sorting.
If set to true, the primary sorting order will be ascending.
If set to false, the primary sorting order will be descending.
Attribute path for the value of each document to use for sorting.
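For example, either primary sort variant can be used when creating the View (the View name and field are hypothetical):

const db = new Database();
await db.createView("sortedView", {
  type: "arangosearch",
  primarySort: [
    { field: "name", direction: "asc" },
    // equivalent: { field: "name", asc: true }
  ],
});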
Options for creating a SearchAlias View.
Options for creating a View.
Sorting direction. Descending or ascending.
Options defining an index used in a SearchAlias View.
Name of a collection.
Name of an inverted index in the collection.
Properties of a SearchAlias View.
Options for modifying the properties of a SearchAlias View.
An array of inverted indexes to add to the View.
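A sketch of creating a SearchAlias View from these options (the View, collection and index names are hypothetical):

const db = new Database();
await db.createView("myAlias", {
  type: "search-alias",
  indexes: [{ collection: "articles", index: "inv_title" }],
});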
Policy to consolidate if the sum of all candidate segment byte size is less than the total segment byte size multiplied by a given threshold.
Optional minScore?: number
Consolidation candidates with a score less than this value will be filtered out.
Default: 0
Optional segmentsBytesFloor?: number
Size below which all segments are treated as equivalent.
Default: 2097152 (2 MiB)
Optional segmentsBytesMax?: number
Maximum allowed size of all consolidation segments.
Default: 5368709120 (5 GiB)
Optional segmentsMax?: number
Maximum number of segments that are evaluated as candidates for consolidation.
Default: 10
Optional segmentsMin?: number
Minimum number of segments that are evaluated as candidates for consolidation.
Default: 1
Type of consolidation policy.
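For example (the View name is hypothetical; the policy values shown are the defaults):

const db = new Database();
const view = db.view("articlesView");
await view.updateProperties({
  consolidationPolicy: {
    type: "tier",
    segmentsMin: 1,
    segmentsMax: 10,
    segmentsBytesFloor: 2097152, // 2 MiB
    segmentsBytesMax: 5368709120, // 5 GiB
    minScore: 0,
  },
});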
Options for partially modifying the properties of an ArangoSearch View.
Options defining an index to be modified in a SearchAlias View.
Optional operation?: "add" | "del"
Whether to add or remove the index.
Default: "add"
Options for partially modifying the properties of a SearchAlias View.
An array of inverted indexes to add to or remove from the View.
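For example, removing an index from a SearchAlias View (the View, collection and index names are hypothetical):

const db = new Database();
const view = db.view("myAlias");
await view.updateProperties({
  indexes: [{ collection: "articles", index: "inv_title", operation: "del" }],
});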
Options for partially modifying a View's properties.
Generic description of a View.
A globally unique identifier for this View.
An identifier for this View.
Name of the View.
Type of the View.
Options for replacing a View's properties.