
Commit

Merge branch 'elastic:main' into main
nipundev authored Aug 31, 2023
2 parents 8223d23 + 5e93c97 commit b36981a
Showing 569 changed files with 7,448 additions and 1,986 deletions.
@@ -19,6 +19,7 @@
import org.elasticsearch.compute.operator.Operator;
import org.elasticsearch.core.TimeValue;
import org.elasticsearch.xpack.esql.evaluator.EvalMapper;
import org.elasticsearch.xpack.esql.evaluator.predicate.operator.comparison.Equals;
import org.elasticsearch.xpack.esql.expression.function.scalar.date.DateTrunc;
import org.elasticsearch.xpack.esql.expression.function.scalar.math.Abs;
import org.elasticsearch.xpack.esql.expression.function.scalar.multivalue.MvMin;
@@ -27,7 +28,6 @@
import org.elasticsearch.xpack.ql.expression.FieldAttribute;
import org.elasticsearch.xpack.ql.expression.Literal;
import org.elasticsearch.xpack.ql.expression.predicate.operator.arithmetic.Add;
import org.elasticsearch.xpack.ql.expression.predicate.operator.comparison.Equals;
import org.elasticsearch.xpack.ql.tree.Source;
import org.elasticsearch.xpack.ql.type.DataTypes;
import org.elasticsearch.xpack.ql.type.EsField;
17 changes: 17 additions & 0 deletions branches.json
@@ -0,0 +1,17 @@
{
"notice": "This file is not maintained outside of the main branch and should only be used for tooling.",
"branches": [
{
"branch": "main"
},
{
"branch": "8.10"
},
{
"branch": "8.9"
},
{
"branch": "7.17"
}
]
}
5 changes: 5 additions & 0 deletions docs/changelog/98470.yaml
@@ -0,0 +1,5 @@
pr: 98470
summary: Reduce verbosity of the bulk indexing audit log
area: Audit
type: enhancement
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/98528.yaml
@@ -0,0 +1,6 @@
pr: 98528
summary: "ESQL: Add support for TEXT fields in comparison operators and SORT"
area: ES|QL
type: enhancement
issues:
- 98642
6 changes: 6 additions & 0 deletions docs/changelog/98870.yaml
@@ -0,0 +1,6 @@
pr: 98870
summary: "ESQL: Add ability to perform date math"
area: ES|QL
type: enhancement
issues:
- 98402
5 changes: 5 additions & 0 deletions docs/changelog/98944.yaml
@@ -0,0 +1,5 @@
pr: 98944
summary: Auto-normalize `dot_product` vectors at index & query
area: Vector Search
type: enhancement
issues: []
5 changes: 5 additions & 0 deletions docs/changelog/98961.yaml
@@ -0,0 +1,5 @@
pr: 98961
summary: Fix NPE when `GetUser` with profile uid before profile index exists
area: Security
type: bug
issues: []
6 changes: 6 additions & 0 deletions docs/changelog/98987.yaml
@@ -0,0 +1,6 @@
pr: 98987
summary: EQL and ESQL to use only the necessary fields in the internal `field_caps`
calls
area: EQL
type: enhancement
issues: []
@@ -9,7 +9,7 @@ can change the length of a token, the `trim` filter does _not_ change a token's
offsets.

The `trim` filter uses Lucene's
https://lucene.apache.org/core/{lucene_version_path}/analyzers-common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html[TrimFilter].
https://lucene.apache.org/core/{lucene_version_path}/analysis/common/org/apache/lucene/analysis/miscellaneous/TrimFilter.html[TrimFilter].

[TIP]
====
@@ -110,4 +110,4 @@ PUT trim_example
}
}
}
----
----
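
For quick experimentation, the filter can also be exercised directly through the
analyze API; a minimal sketch (the whitespace around `fox` is illustrative):

[source,console]
----
GET _analyze
{
  "tokenizer": "keyword",
  "filter": ["trim"],
  "text": " fox "
}
----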
2 changes: 1 addition & 1 deletion docs/reference/docs/reindex.asciidoc
@@ -123,7 +123,7 @@ conflict.

IMPORTANT: Because data streams are <<data-streams-append-only,append-only>>,
any reindex request to a destination data stream must have an `op_type`
of`create`. A reindex can only add new documents to a destination data stream.
of `create`. A reindex can only add new documents to a destination data stream.
It cannot update existing documents in a destination data stream.
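
For illustration, a reindex into a data stream might look like the following
sketch (the source index and destination data stream names are hypothetical):

[source,console]
----
POST _reindex
{
  "source": {
    "index": "archived-logs"
  },
  "dest": {
    "index": "logs-my-app-default",
    "op_type": "create"
  }
}
----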

By default, version conflicts abort the `_reindex` process.
2 changes: 1 addition & 1 deletion docs/reference/esql/functions/signature/ceil.svg
1 change: 1 addition & 0 deletions docs/reference/esql/functions/signature/left.svg
2 changes: 1 addition & 1 deletion docs/reference/esql/functions/types/ceil.asciidoc
@@ -1,6 +1,6 @@
[%header.monospaced.styled,format=dsv,separator=|]
|===
arg1 | result
n | result
double | double
integer | integer
long | long
5 changes: 5 additions & 0 deletions docs/reference/esql/functions/types/left.asciidoc
@@ -0,0 +1,5 @@
[%header.monospaced.styled,format=dsv,separator=|]
|===
arg1 | arg2 | result
keyword | integer | keyword
|===
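
As a rough usage sketch of this signature (run through the query API; the row
value is made up):

[source,console]
----
POST /_query?format=txt
{
  "query": "ROW word = \"Elasticsearch\" | EVAL prefix = LEFT(word, 7)"
}
----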
7 changes: 4 additions & 3 deletions docs/reference/esql/index.asciidoc
@@ -95,9 +95,8 @@ POST /_query?format=txt
[discrete]
==== {kib}

{esql} can be used in Discover to explore a data set, and in Lens to visualize it.
First, enable the `enableTextBased` setting in *Advanced Settings*. Next, in
Discover or Lens, from the data view dropdown, select *{esql}*.
Use {esql} in Discover to explore a data set. From the data view dropdown,
select *Try {esql}* to get started.

NOTE: {esql} queries in Discover and Lens are subject to the time range selected
with the time filter.
@@ -136,6 +135,8 @@ include::aggregation-functions.asciidoc[]

include::multivalued-fields.asciidoc[]

include::metadata-fields.asciidoc[]

include::task-management.asciidoc[]

:esql-tests!:
55 changes: 55 additions & 0 deletions docs/reference/esql/metadata-fields.asciidoc
@@ -0,0 +1,55 @@
[[esql-metadata-fields]]
== {esql} metadata fields

++++
<titleabbrev>Metadata fields</titleabbrev>
++++

{esql} can access <<mapping-fields, metadata fields>>. The currently
supported ones are:

* <<mapping-index-field,`_index`>>: the index to which the document belongs.
The field is of the type <<keyword, keyword>>.

* <<mapping-id-field,`_id`>>: the source document's ID. The field is of the
type <<keyword, keyword>>.

* `_version`: the source document's version. The field is of the type
<<number,long>>.

To enable access to these fields, the <<esql-from,`FROM`>> source command must
be provided with a dedicated directive:

[source,esql]
----
FROM index [METADATA _index, _id]
----

Metadata fields are only available if the source of the data is an index.
Consequently, `FROM` is the only source command that supports the `METADATA`
directive.
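
For example, a query that enables these fields might look like the following
sketch (the index and field names are illustrative):

[source,console]
----
POST /_query?format=txt
{
  "query": "FROM employees [METADATA _index, _id] | KEEP _index, _id, emp_no | LIMIT 5"
}
----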

Once enabled, these fields are available to subsequent processing commands, just
like any other index field:

[source.merge.styled,esql]
----
include::{esql-specs}/metadata-ignoreCsvTests.csv-spec[tag=multipleIndices]
----
[%header.monospaced.styled,format=dsv,separator=|]
|===
include::{esql-specs}/metadata-ignoreCsvTests.csv-spec[tag=multipleIndices-result]
|===

Also, similar to index fields, once an aggregation is performed, a
metadata field is no longer accessible to subsequent commands, unless
it is used as a grouping field:

[source.merge.styled,esql]
----
include::{esql-specs}/metadata-ignoreCsvTests.csv-spec[tag=metaIndexInAggs]
----
[%header.monospaced.styled,format=dsv,separator=|]
|===
include::{esql-specs}/metadata-ignoreCsvTests.csv-spec[tag=metaIndexInAggs-result]
|===
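
A rough sketch of such a grouping query (the index pattern is hypothetical):

[source,console]
----
POST /_query?format=txt
{
  "query": "FROM employees-* [METADATA _index] | STATS count = COUNT(*) BY _index"
}
----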
7 changes: 7 additions & 0 deletions docs/reference/esql/source-commands/from.asciidoc
@@ -27,3 +27,10 @@ or aliases:
----
FROM employees-00001,employees-*
----

Use the `METADATA` directive to enable <<esql-metadata-fields,metadata fields>>:

[source,esql]
----
FROM employees [METADATA _id]
----
2 changes: 1 addition & 1 deletion docs/reference/graph/explore.asciidoc
@@ -387,7 +387,7 @@ To spider out, you need to specify two things:
* The set of vertices you already know about that you want to exclude from the
results of the spidering operation.

You specify this information using `include`and `exclude` clauses. For example,
You specify this information using `include` and `exclude` clauses. For example,
the following request starts with the product `1854873` and spiders
out to find additional search terms associated with that product. The terms
"midi", "midi keyboard", and "synth" are excluded from the results.
2 changes: 1 addition & 1 deletion docs/reference/health/health.asciidoc
@@ -413,7 +413,7 @@ watermark threshold>>.

`unhealthy_policies`::
(map) A detailed view on the policies that are considered unhealthy due to having
several consecutive unssuccesful invocations.
several consecutive unsuccessful invocations.
The `count` key represents the number of unhealthy policies (int).
The `invocations_since_last_success` key will report a map where the unhealthy policy
name is the key and its corresponding number of failed invocations is the value.
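
Putting that together, the relevant fragment of a health report might look
roughly like this (the policy name and counts are invented for illustration):

[source,console-result]
----
"unhealthy_policies": {
  "count": 1,
  "invocations_since_last_success": {
    "my-policy": 5
  }
}
----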
4 changes: 2 additions & 2 deletions docs/reference/how-to/knn-search.asciidoc
@@ -21,8 +21,8 @@ options.
The `cosine` option accepts any float vector and computes the cosine
similarity. While this is convenient for testing, it's not the most efficient
approach. Instead, we recommend using the `dot_product` option to compute the
similarity. To use `dot_product`, all vectors need to be normalized in advance
to have length 1. The `dot_product` option is significantly faster, since it
similarity. When using `dot_product`, all vectors are normalized during indexing to have
a magnitude of 1. The `dot_product` option is significantly faster, since it
avoids performing extra vector length computations during the search.
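
A minimal mapping sketch using `dot_product` (index and field names are
hypothetical, and `dims` must match the size of your embeddings):

[source,console]
----
PUT my-knn-index
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 384,
        "index": true,
        "similarity": "dot_product"
      }
    }
  }
}
----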

[discrete]
2 changes: 1 addition & 1 deletion docs/reference/ilm/error-handling.asciidoc
@@ -156,7 +156,7 @@ You can use the <<ilm-explain-lifecycle,{ilm-init} Explain API>> to monitor the
[discrete]
==== How `min_age` is calculated

When setting up an <<set-up-lifecycle-policy,{ilm-init} policy>> or <<getting-started-index-lifecycle-management,automating rollover with {ilm-init}>>, be aware that`min_age` can be relative to either the rollover time or the index creation time.
When setting up an <<set-up-lifecycle-policy,{ilm-init} policy>> or <<getting-started-index-lifecycle-management,automating rollover with {ilm-init}>>, be aware that `min_age` can be relative to either the rollover time or the index creation time.

If you use <<ilm-rollover,{ilm-init} rollover>>, `min_age` is calculated relative to the time the index was rolled over. This is because the <<indices-rollover-index,rollover API>> generates a new index. The `creation_date` of the new index (retrievable via <<indices-get-settings>>) is used in the calculation. If you do not use rollover in the {ilm-init} policy, `min_age` is calculated relative to the `creation_date` of the original index.
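
For example, in a policy like the following sketch (the policy name and ages are
illustrative), the delete phase's `min_age` of `30d` is measured from the
rollover time, because the hot phase performs a rollover:

[source,console]
----
PUT _ilm/policy/my-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "7d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
----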

10 changes: 5 additions & 5 deletions docs/reference/mapping/types/dense-vector.asciidoc
@@ -159,9 +159,9 @@ Computes the dot product of two vectors. This option provides an optimized way
to perform cosine similarity. The constraints and computed score are defined
by `element_type`.
+
When `element_type` is `float`, all vectors must be unit length, including both
document and query vectors. The document `_score` is computed as
`(1 + dot_product(query, vector)) / 2`.
When `element_type` is `float`, all vectors are automatically converted to unit length, including both
document and query vectors. Consequently, `dot_product` does not allow vectors with a zero magnitude.
The document `_score` is computed as `(1 + dot_product(query, vector)) / 2`.
+
When `element_type` is `byte`, all vectors must have the same
length including both document and query vectors or results will be inaccurate.
@@ -171,9 +171,9 @@ where `dims` is the number of dimensions per vector.
`cosine`:::
Computes the cosine similarity. Note that the most efficient way to perform
cosine similarity is to normalize all vectors to unit length, and instead use
cosine similarity is to have all vectors normalized to unit length, and instead use
`dot_product`. You should only use `cosine` if you need to preserve the
original vectors and cannot normalize them in advance. The document `_score`
original vectors and cannot allow Elasticsearch to normalize them. The document `_score`
is computed as `(1 + cosine(query, vector)) / 2`. The `cosine` similarity does
not allow vectors with zero magnitude, since cosine is not defined in this
case.
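
To make the formulas concrete: for a pair of unit-length vectors whose dot
product (equivalently, cosine) is 0.6, either similarity yields a document
`_score` of (1 + 0.6) / 2 = 0.8 (an illustrative value, not taken from this page).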
@@ -230,7 +230,7 @@ Remove the `http.content_type.required` setting from `elasticsearch.yml`. Specif
[%collapsible]
====
*Details* +
The `http.tcp_no_delay` setting was deprecated in 7.x and has been removed in 8.0. Use`http.tcp.no_delay` instead.
The `http.tcp_no_delay` setting was deprecated in 7.x and has been removed in 8.0. Use `http.tcp.no_delay` instead.
*Impact* +
Replace the `http.tcp_no_delay` setting with `http.tcp.no_delay`.
@@ -246,7 +246,7 @@ The `network.tcp.connect_timeout` setting was deprecated in 7.x and has been rem
was a fallback setting for `transport.connect_timeout`.
*Impact* +
Remove the`network.tcp.connect_timeout` setting.
Remove the `network.tcp.connect_timeout` setting.
Use the `transport.connect_timeout` setting to change the default connection
timeout for client connections. Specifying
`network.tcp.connect_timeout` in `elasticsearch.yml` will result in an
@@ -22,7 +22,7 @@ Update your workflow and applications to use the `ilm` package in place of
To create `Fuzziness` instances, use the `fromString` and `fromEdits` method
instead of the `build` method that used to accept both Strings and numeric
values. Several fuzziness setters on query builders (e.g.
MatchQueryBuilder#fuzziness) now accept only a `Fuzziness`instance instead of
MatchQueryBuilder#fuzziness) now accept only a `Fuzziness` instance instead of
an Object.
Fuzziness used to be lenient when it comes to parsing arbitrary numeric values
15 changes: 10 additions & 5 deletions docs/reference/modules/cluster/remote-clusters-api-key.asciidoc
@@ -63,13 +63,15 @@ information, refer to https://www.elastic.co/subscriptions.

===== On the remote cluster

// tag::remote-cluster-steps[]
. Enable the remote cluster server port on every node of the remote cluster by
setting `remote_cluster_server.enabled` to `true` in `elasticsearch.yml`. The
port number defaults to `9443` and can be configured with the
`remote_cluster.port` setting. Refer to <<remote-cluster-network-settings>>.

. Next, generate a CA and a server certificate/key pair. On one of the nodes
of the remote cluster, from the directory where {es} has been installed:
. Next, generate a certificate authority (CA) and a server certificate/key pair.
On one of the nodes of the remote cluster, from the directory where {es} has
been installed:

.. Create a CA, if you don't have a CA already:
+
@@ -137,16 +139,18 @@ When prompted, enter the `CERT_PASSWORD` from the earlier step.

. Restart the remote cluster.

. On the remote cluster, generate a cross-cluster API key using the
. On the remote cluster, generate a cross-cluster API key that provides access
to the indices you want to use for {ccs} or {ccr}. You can use the
<<security-api-create-cross-cluster-api-key>> API or
{kibana-ref}/api-keys.html[Kibana]. Grant the key the required access for {ccs}
or {ccr}.
{kibana-ref}/api-keys.html[Kibana].

. Copy the encoded key (`encoded` in the response) to a safe location. You will
need it to connect to the remote cluster later.
// end::remote-cluster-steps[]
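
As an illustration of the API key step above, a creation request might look like
the following sketch (the key name and index pattern are placeholders; adjust the
`access` section to the indices you actually need):

[source,console]
----
POST /_security/cross_cluster/api_key
{
  "name": "my-cross-cluster-key",
  "access": {
    "search": [
      { "names": [ "logs-*" ] }
    ]
  }
}
----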

===== On the local cluster

// tag::local-cluster-steps[]
. On every node of the local cluster:

.. Copy the `ca.crt` file generated on the remote cluster earlier into the
@@ -159,6 +163,7 @@ need it to connect to the remote cluster later.
xpack.security.remote_cluster_client.ssl.enabled: true
xpack.security.remote_cluster_client.ssl.certificate_authorities: [ "remote-cluster-ca.crt" ]
----
// end::local-cluster-steps[]

.. Add the cross-cluster API key, created on the remote cluster earlier, to the
keystore: