diff --git a/README.adoc b/README.adoc index ac43e1f12..a50e3d1df 100644 --- a/README.adoc +++ b/README.adoc @@ -111,7 +111,7 @@ Add the `spring-data-aerospike` Maven dependency: Notes: -* It is recommended to use the latest version of Spring Data Aerospike, checkout Spring Data Aerospike +* It is recommended to use the latest version. Check out Spring Data Aerospike https://github.com/aerospike/spring-data-aerospike/releases[GitHub releases] * Spring Data Aerospike uses the https://github.com/aerospike/aerospike-client-java[Aerospike Java Client] (and https://github.com/aerospike/aerospike-client-java-reactive[Aerospike Reactive Java Client]) under the hood @@ -148,8 +148,8 @@ In order to configure Spring Data Aerospike you will need to create a configurat annotation. To set the connection details you can either override `getHosts()` and `nameSpace()` methods -of the `AbstractAerospikeDataConfiguration` class or define `spring-data-aerospike.connection.hosts` and -`spring-data-aerospike.connection.namespace` in `application.properties` file. +of the `AbstractAerospikeDataConfiguration` class or define `spring.aerospike.hosts` and +`spring.data.aerospike.namespace` in `application.properties` file. NOTE: You can further customize your configuration by changing other xref:#configuration[`settings`]. diff --git a/src/main/asciidoc/index.adoc b/src/main/asciidoc/index.adoc index 4fdd4e915..155777e89 100644 --- a/src/main/asciidoc/index.adoc +++ b/src/main/asciidoc/index.adoc @@ -1,3 +1,4 @@ +[[aerospike.index]] = Spring Data Aerospike - Documentation :doctype: book :revnumber: 4.8.0 @@ -5,7 +6,6 @@ :toc: :toc-placement!: :toclevels: 1 -:spring-data-commons-docs-online: https://docs.spring.io/spring-data/commons/docs/current/reference/html (C) 2018-2024 The original authors. 
@@ -26,28 +26,24 @@ include::preface.adoc[] include::reference/functionality.adoc[] include::reference/installation-and-usage.adoc[] -include::spring-data-commons-docs/repositories.adoc[] include::reference/aerospike-repositories.adoc[] include::reference/aerospike-reactive-repositories.adoc[] -include::spring-data-commons-docs/repository-projections.adoc[] include::reference/aerospike-projections.adoc[] -include::spring-data-commons-docs/query-by-example.adoc[] include::reference/query-methods-preface.adoc[] include::reference/query-methods-simple-property.adoc[] include::reference/query-methods-collection.adoc[] include::reference/query-methods-map.adoc[] include::reference/query-methods-pojo.adoc[] include::reference/query-methods-id.adoc[] +include::reference/query-methods-combined.adoc[] include::reference/query-methods-modification.adoc[] -include::spring-data-commons-docs/object-mapping.adoc[] include::reference/aerospike-object-mapping.adoc[] +include::reference/aerospike-custom-converters.adoc[] include::reference/template.adoc[] include::reference/secondary-indexes.adoc[] include::reference/indexed-annotation.adoc[] include::reference/caching.adoc[] include::reference/configuration.adoc[] -include::spring-data-commons-docs/dependencies.adoc[] -include::spring-data-commons-docs/auditing.adoc[] :leveloffset: -1 @@ -56,9 +52,8 @@ include::spring-data-commons-docs/auditing.adoc[] :leveloffset: +1 -include::spring-data-commons-docs/repository-namespace-reference.adoc[] -include::spring-data-commons-docs/repository-populator-namespace-reference.adoc[] -include::spring-data-commons-docs/repository-query-keywords-reference.adoc[] -include::spring-data-commons-docs/repository-query-return-types-reference.adoc[] +link:https://docs.spring.io/spring-data/commons/reference/index.html[Spring Data Commons Documentation Reference] + +link:https://docs.spring.io/spring-framework/reference/[Spring Framework Documentation Overview] :leveloffset: -1 diff --git a/src/main/asciidoc/preface.adoc b/src/main/asciidoc/preface.adoc index 0fcfa46ba..f9bd8a8f7 100644 --- a/src/main/asciidoc/preface.adoc +++ b/src/main/asciidoc/preface.adoc @@ -7,11 +7,9 @@ This chapter provides some basic introduction to Spring and Aerospike, it explai [[get-started:first-steps:spring]] == Knowing Spring -Spring Data uses Spring framework's https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/spring-core.html[core] functionality, such as the https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/beans.html[IoC] container, https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/validation.html#core-convert[type conversion system], https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/expressions.html[expression language], https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/jmx.html[JMX integration], and portable https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/dao.html#dao-exceptions[DAO exception hierarchy]. While it is not important to know the Spring APIs, understanding the concepts behind them is. At a minimum, the idea behind IoC should be familiar regardless of IoC container you choose to use. 
+Spring Data uses Spring framework's https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/spring-core.html[core] functionality, such as the https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/beans.html[IoC] container, https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/validation.html#core-convert[type conversion system], https://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/dao.html#dao-exceptions[DAO exception hierarchy] etc. While it is not important to know the Spring APIs, understanding the concepts behind them is. At a minimum, the idea behind IoC should be familiar regardless of IoC container you choose to use. -The core functionality of the Aerospike support can be used directly, with no need to invoke the IoC services of the Spring Container. This is much like `JdbcTemplate` which can be used 'standalone' without any other services of the Spring container. To leverage all the features of the Spring Data document, such as the repository support, you will need to configure some parts of the library using Spring. - -To learn more about Spring, you can refer to the comprehensive (and sometimes disarming) documentation that explains in detail the Spring Framework. There are a lot of articles, blog entries and books on the matter - take a look at the Spring framework https://docs.spring.io/spring-framework/reference/[documentation reference] for more information. +To learn more about Spring, you can refer to the comprehensive documentation that explains in detail the Spring Framework. There are a lot of articles, blog entries and books on the matter - take a look at the Spring framework https://docs.spring.io/spring-framework/reference/[documentation reference] for more information. [[get-started:first-steps:nosql]] == Knowing NoSQL and Aerospike @@ -28,7 +26,7 @@ The jumping off ground for learning about Aerospike is https://www.aerospike.com Spring Data Aerospike binaries require JDK level 17.0 and above. -In terms of server, it is required to use at least https://www.aerospike.com/download/server/[Aerospike server] version 5.2 (recommended to use the latest version when possible). +In terms of server, it is required to use at least https://www.aerospike.com/download/server/[Aerospike server] version 6.1 (recommended to use the latest version when possible). == Additional Help Resources diff --git a/src/main/asciidoc/reference/aerospike-custom-converters.adoc b/src/main/asciidoc/reference/aerospike-custom-converters.adoc new file mode 100644 index 000000000..81a9a8857 --- /dev/null +++ b/src/main/asciidoc/reference/aerospike-custom-converters.adoc @@ -0,0 +1,55 @@ +[[aerospike.custom-converters]] += Aerospike Custom Converters + +Spring type converters are components used to convert data between different types, particularly when interacting with databases or binding data from external sources. They facilitate seamless transformation of data, such as converting between String and database-specific types (e.g., LocalDate to DATE or String to enumerations). + +For more details, see link:https://docs.spring.io/spring-framework/reference/core/validation/convert.html[Spring Type Conversion]. + +Spring provides a set of default type converters for common conversions. Spring Data Aerospike has its own built-in converters in `DateConverters` and `AerospikeConverters` classes. + +However, in certain cases, custom converters are necessary to handle specific logic or custom serialization requirements. 
Custom converters allow developers to define precise conversion rules, ensuring data integrity and compatibility between application types and database representations. + +To add a custom converter, you can leverage Spring's `Converter` SPI to implement type conversion logic and override the `customConverters()` method available in `AerospikeDataConfigurationSupport`. Here is an example: + +[source,java] +---- +public class BlockingTestConfig extends AbstractAerospikeDataConfiguration { + + @Override + protected List<Object> customConverters() { + return List.of( + CompositeKey.CompositeKeyToStringConverter.INSTANCE, + CompositeKey.StringToCompositeKeyConverter.INSTANCE + ); + } + + @Value + public static class CompositeKey { + + String firstPart; + long secondPart; + + @WritingConverter + public enum CompositeKeyToStringConverter implements Converter<CompositeKey, String> { + INSTANCE; + + @Override + public String convert(CompositeKey source) { + return source.firstPart + "::" + source.secondPart; + } + } + + @ReadingConverter + public enum StringToCompositeKeyConverter implements Converter<String, CompositeKey> { + INSTANCE; + + @Override + public CompositeKey convert(String source) { + String[] split = source.split("::"); + return new CompositeKey(split[0], Long.parseLong(split[1])); + } + } + } +} +---- + diff --git a/src/main/asciidoc/reference/aerospike-object-mapping.adoc index 1142bd7dd..eb890536f 100644 --- a/src/main/asciidoc/reference/aerospike-object-mapping.adoc +++ b/src/main/asciidoc/reference/aerospike-object-mapping.adoc @@ -1,26 +1,30 @@ [[aerospike.object-mapping]] = Aerospike Object Mapping -Rich mapping support is provided by the `AerospikeMappingConverter`. `AerospikeMappingConverter` has a rich metadata model that provides a full feature set of functionality to map domain objects to Aerospike clusters and objects.The mapping metadata model is populated using annotations on your domain objects. However, the infrastructure is not limited to using annotations as the only source of metadata information. The `AerospikeMappingConverter` also allows you to map objects without providing any additional metadata, by following a set of conventions. +Rich mapping support is provided by the `AerospikeMappingConverter` which has a rich metadata model that provides a full feature set of functionality to map domain objects to Aerospike objects. The mapping metadata model is populated using annotations on your domain objects. +However, the infrastructure is not limited to using annotations as the only source of metadata information. +The `AerospikeMappingConverter` also allows you to map objects without providing any additional metadata, by following a set of conventions. In this section, we will describe the features of the `AerospikeMappingConverter`, how to use conventions for mapping objects to documents and how to override those conventions with annotation-based mapping metadata. -For more details refer to SpringData documentation: -<>. +For more details, refer to Spring Data documentation: +link:https://docs.spring.io/spring-data/commons/reference/object-mapping.html[Object Mapping]. [[mapping-conventions]] == Convention Based Mapping -`AerospikeMappingConverter` has a few conventions for mapping objects to documents when no additional mapping metadata is provided. The conventions are: +`AerospikeMappingConverter` has a few conventions for mapping objects to documents when no additional mapping metadata is provided.
+The conventions are: [[mapping-conventions-id-field]] === How the 'id' Field Is Handled in the Mapping Layer -Aerospike DB requires that you have an `id` field for all objects. The `id` field can be of any primitive type as well as `String` or `byte[]`. +Aerospike DB requires that you have an `id` field for all objects. +The `id` field can be of any primitive type as well as `String` or `byte[]`. The following table outlines the requirements for the `id` field: -[cols="1,2", options="header"] +[cols="1,2",options="header"] .Examples for the translation of '_id'-field definitions |=== | Field definition @@ -29,20 +33,17 @@ The following table outlines the requirements for the `id` field: | `String` id | A field named 'id' without an annotation -| `@Field` `String` id +| `@Id` `String` myId | A field annotated with `@Id` (`org.springframework.data.annotation.Id`) -| `@Id` `String` customNamedIdField - |=== The following description outlines what type of conversion, if any, will be done on the property mapped to the `id` document field: * By default, the type of the field annotated with `@id` is turned into a `String` to be stored in Aerospike database. If the original type cannot be persisted (see xref:#configuration.keep-original-key-types[keepOriginalKeyTypes] -for details), it must be convertible to `String` and will be stored in the database as such, -then converted back to the original type when the object is read. This is transparent to the application -but needs to be considered if using external tools like `AQL` to view the data. +for details), it must be convertible to `String` and will be stored in the database as such, then converted back to the original type when the object is read. +This is transparent to the application but needs to be considered if using external tools like `AQL` and the Aerospike JDBC Driver to view the data. * If no field named "id" is present in the Java class then an implicit '_id' file will be generated by the driver but not mapped to a property or field of the Java class. When querying and updating `AerospikeTemplate` will use the converter to handle conversions of the `Query` and `Update` objects that correspond to the above rules for saving documents so field names and types used in your queries will be able to match what is in your domain classes. @@ -50,7 +51,8 @@ When querying and updating `AerospikeTemplate` will use the converter to handle [[mapping-configuration]] == Mapping Configuration -Unless explicitly configured, an instance of `AerospikeMappingConverter` is created by default when creating a `AerospikeTemplate`. You can create your own instance of the `MappingAerospikeConverter` so as to tell it where to scan the classpath at the startup of your domain classes in order to extract metadata and construct indexes. +Unless explicitly configured, an instance of `AerospikeMappingConverter` is created by default when creating a `AerospikeTemplate`. +You can create your own instance of the `MappingAerospikeConverter` so as to tell it where to scan the classpath at the startup of your domain classes in order to extract metadata and construct indexes. Also, to have more control over the conversion process (if needed), you can register converters to use for mapping specific classes to and from the database. NOTE: AbstractAerospikeConfiguration will create an AerospikeTemplate instance and register with the container under the name 'AerospikeTemplate'. 
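As a minimal sketch of such a registration (reusing the `CompositeKey` converters shown in the custom converters chapter; the configuration class name is illustrative), overriding `customConverters()` is sufficient:

[source,java]
----
public class MyAerospikeConfiguration extends AbstractAerospikeDataConfiguration {

    @Override
    protected List<Object> customConverters() {
        // converters registered here are picked up by the MappingAerospikeConverter
        return List.of(
                CompositeKey.CompositeKeyToStringConverter.INSTANCE,
                CompositeKey.StringToCompositeKeyConverter.INSTANCE
        );
    }
}
----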
@@ -58,20 +60,23 @@ NOTE: AbstractAerospikeConfiguration will create an AerospikeTemplate instance a [[mapping-usage-annotations]] === Mapping Annotation Overview -The MappingAerospikeConverter can use metadata to drive the mapping of objects to documents using annotations. An overview of the annotations is provided below +The MappingAerospikeConverter can use metadata to drive the mapping of objects to documents using annotations. +An overview of the annotations is provided below * `@Id` - applied at the field level to mark the field used for identity purposes. * `@Field` - applied at the field level, describes the name of the field as it will be represented in the AerospikeDB BSON document thus allowing the name to be different from the field name of the class. -* `@Version` - applied at the field level to mark record modification count. The value must be effectively integer. +* `@Version` - applied at the field level to mark record modification count. +The value must be effectively integer. In Spring Data Aerospike, documents come in two forms – non-versioned and versioned. Documents with an `@Version` annotation have a version field populated by the corresponding record’s generation count. Version can be passed to a constructor or not (in that case it stays equal to zero). * `@Expiration` - applied at the field level to mark a property to be used as expiration field. -Expiration can be specified in two flavors: as an offset in seconds from the current time (then field value must be -effectively integer) or as an absolute Unix timestamp. Client system time must be synchronized -with Aerospike server system time, otherwise expiration behaviour will be unpredictable. +Expiration can be specified in two flavors: as an offset in seconds from the current time (then field value must be effectively integer) or as an absolute Unix timestamp. +Client system time must be synchronized with Aerospike server system time, otherwise expiration behaviour will be unpredictable. -The mapping metadata infrastructure is defined in a separate spring-data-commons project that is technology-agnostic. Specific subclasses are used in the AerospikeDB support to support annotation-based metadata. Other strategies are also possible to put in place if there is demand. +The mapping metadata infrastructure is defined in a separate spring-data-commons project that is technology-agnostic. +Specific subclasses are used in the AerospikeDB support to support annotation-based metadata. +Other strategies are also possible to put in place if there is demand. Here is an example of a more complex mapping. diff --git a/src/main/asciidoc/reference/aerospike-projections.adoc b/src/main/asciidoc/reference/aerospike-projections.adoc index 7a0f3339c..1dde17303 100644 --- a/src/main/asciidoc/reference/aerospike-projections.adoc +++ b/src/main/asciidoc/reference/aerospike-projections.adoc @@ -3,7 +3,7 @@ Spring Data Aerospike supports Projections, a mechanism that allows you to fetch only relevant fields from Aerospike for a particular use case. This results in better performance, less network traffic, and a better understanding of what is required for the rest of the flow. -For more details refer to SpringData documentation: <>. +For more details, refer to Spring Data documentation: link:https://docs.spring.io/spring-data/rest/reference/data-commons/repositories/projections.html[Projections]. 
For example, consider a Person class: diff --git a/src/main/asciidoc/reference/aerospike-repositories.adoc index 4f0574951..554125ba9 100644 --- a/src/main/asciidoc/reference/aerospike-repositories.adoc +++ b/src/main/asciidoc/reference/aerospike-repositories.adoc @@ -9,7 +9,7 @@ One of the main goals of the Spring Data is to significantly reduce the amount o One of the core interfaces of Spring Data is `Repository`. This interface acts primarily to capture the types to work with and to help user to discover interfaces that extend Repository. -In other words, it allows user to have basic and complicated queries without writing the implementation. This builds on the <>, so make sure you've got a sound understanding of this concept. +In other words, it allows the user to have basic and complicated queries without writing the implementation. This builds on the link:https://docs.spring.io/spring-data/rest/reference/data-commons/repositories.html[Core Spring Data Repository Support], so make sure you've got a sound understanding of this concept. [[aerospike-repo-usage]] == Usage diff --git a/src/main/asciidoc/reference/aerospike.adoc deleted file mode 100644 index e69de29bb..000000000 diff --git a/src/main/asciidoc/reference/configuration.adoc index 00e29bcb0..ccdccac90 100644 --- a/src/main/asciidoc/reference/configuration.adoc +++ b/src/main/asciidoc/reference/configuration.adoc @@ -1,7 +1,7 @@ [[configuration]] = Configuration -Configuration parameters can be set in a standard `application.properties` file using `spring-data-aerospike.*` prefix +Configuration parameters can be set in a standard `application.properties` file using `spring.data.aerospike.*` prefix or by overriding configuration from `AbstractAerospikeDataConfiguration` class.
[[configuration.application-properties]] @@ -12,16 +12,16 @@ Here is an example: [source,properties] ---- # application.properties -spring-data-aerospike.connection.hosts=localhost:3000 -spring-data-aerospike.connection.namespace=test -spring-data-aerospike.data.scans-enabled=false -spring-data-aerospike.data.send-key=true -spring-data-aerospike.data.create-indexes-on-startup=true -spring-data-aerospike.data.index-cache-refresh-seconds=3600 -spring-data-aerospike.data.server-version-refresh-seconds=3600 -spring-data-aerospike.data.query-max-records=10000 -spring-data-aerospike.data.batch-write-size=100 -spring-data-aerospike.data.keep-original-key-types=false +spring.aerospike.hosts=localhost:3000 +spring.data.aerospike.namespace=test +spring.data.aerospike.scans-enabled=false +spring.data.aerospike.send-key=true +spring.data.aerospike.create-indexes-on-startup=true +spring.data.aerospike.index-cache-refresh-seconds=3600 +spring.data.aerospike.server-version-refresh-seconds=3600 +spring.data.aerospike.query-max-records=10000 +spring.data.aerospike.batch-write-size=100 +spring.data.aerospike.keep-original-key-types=false ---- Configuration class: @@ -60,7 +60,7 @@ class ApplicationConfig extends AbstractAerospikeDataConfiguration { } @Override - public void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { + protected void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { aerospikeDataSettings.setScansEnabled(false); aerospikeDataSettings.setCreateIndexesOnStartup(true); aerospikeDataSettings.setIndexCacheRefreshSeconds(3600); @@ -85,7 +85,7 @@ set via application.properties. [source,properties] ---- # application.properties -spring-data-aerospike.connection.hosts=hostname1:3001, hostname2:tlsName2:3002 +spring.aerospike.hosts=hostname1:3001, hostname2:tlsName2:3002 ---- A String of hosts separated by `,` in form of `hostname1[:tlsName1][:port1],...` @@ -125,7 +125,7 @@ class ApplicationConfig extends AbstractAerospikeDataConfiguration { [source,properties] ---- # application.properties -spring-data-aerospike.connection.namespace=test +spring.data.aerospike.namespace=test ---- Aerospike DB namespace. @@ -160,7 +160,7 @@ for implementation details. [source,properties] ---- # application.properties -spring-data-aerospike.data.scans-enabled=false +spring.data.aerospike.scans-enabled=false ---- A scan can be an expensive operation as all records in the set must be read by the Aerospike server, @@ -178,7 +178,7 @@ It has precedence over reading from application.properties. Here is an example: class ApplicationConfig extends AbstractAerospikeDataConfiguration { @Override - public void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { + protected void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { aerospikeDataSettings.setScansEnabled(false); } } @@ -195,7 +195,7 @@ in a particular use case. [source,properties] ---- # application.properties -spring-data-aerospike.data.create-indexes-on-startup=true +spring.data.aerospike.create-indexes-on-startup=true ---- Create secondary indexes specified using `@Indexed` annotation on startup. @@ -210,7 +210,7 @@ It has precedence over reading from application.properties.
Here is an example: class ApplicationConfig extends AbstractAerospikeDataConfiguration { @Override - public void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { + protected void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { aerospikeDataSettings.setCreateIndexesOnStartup(true); } } @@ -224,7 +224,7 @@ class ApplicationConfig extends AbstractAerospikeDataConfiguration { [source,properties] ---- # application.properties -spring-data-aerospike.data.index-cache-refresh-seconds=3600 +spring.data.aerospike.index-cache-refresh-seconds=3600 ---- Automatically refresh indexes cache every seconds. @@ -239,7 +239,7 @@ It has precedence over reading from application.properties. Here is an example: class ApplicationConfig extends AbstractAerospikeDataConfiguration { @Override - public void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { + protected void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { aerospikeDataSettings.setIndexCacheRefreshSeconds(3600); } } @@ -253,7 +253,7 @@ class ApplicationConfig extends AbstractAerospikeDataConfiguration { [source,properties] ---- # application.properties -spring-data-aerospike.data.server-version-refresh-seconds=3600 +spring.data.aerospike.server-version-refresh-seconds=3600 ---- Automatically refresh cached server version every seconds. @@ -268,7 +268,7 @@ It has precedence over reading from application.properties. Here is an example: class ApplicationConfig extends AbstractAerospikeDataConfiguration { @Override - public void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { + protected void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { aerospikeDataSettings.setServerVersionRefreshSeconds(3600); } } @@ -282,7 +282,7 @@ class ApplicationConfig extends AbstractAerospikeDataConfiguration { [source,properties] ---- # application.properties -spring-data-aerospike.data.query-max-records=10000 +spring.data.aerospike.query-max-records=10000 ---- Limit amount of results returned by server. Non-positive value means no limit. @@ -297,7 +297,7 @@ It has precedence over reading from application.properties. Here is an example: class ApplicationConfig extends AbstractAerospikeDataConfiguration { @Override - public void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { + protected void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { aerospikeDataSettings.setQueryMaxRecords(10000L); } } @@ -311,7 +311,7 @@ class ApplicationConfig extends AbstractAerospikeDataConfiguration { [source,properties] ---- # application.properties -spring-data-aerospike.data.batch-write-size=100 +spring.data.aerospike.batch-write-size=100 ---- Maximum batch size for batch write operations. Non-positive value means no limit. @@ -326,7 +326,7 @@ It has precedence over reading from application.properties. 
Here is an example: class ApplicationConfig extends AbstractAerospikeDataConfiguration { @Override - public void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { + protected void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { aerospikeDataSettings.setBatchWriteSize(100); } } @@ -340,7 +340,7 @@ class ApplicationConfig extends AbstractAerospikeDataConfiguration { [source,properties] ---- # application.properties -spring-data-aerospike.data.keep-original-key-types=false +spring.data.aerospike.keep-original-key-types=false ---- Define how `@Id` fields (primary keys) and `Map` keys are stored in the Aerospike database: @@ -381,7 +381,7 @@ It has precedence over reading from application.properties. Here is an example: class ApplicationConfig extends AbstractAerospikeDataConfiguration { @Override - public void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { + protected void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { aerospikeDataSettings.setKeepOriginalKeyTypes(false); } } @@ -395,7 +395,7 @@ class ApplicationConfig extends AbstractAerospikeDataConfiguration { [source,properties] ---- # application.properties -spring-data-aerospike.data.writeSortedMaps=true +spring.data.aerospike.writeSortedMaps=true ---- Define how Maps and POJOs are written: `true` - as sorted maps (`TreeMap`, default), `false` - as unsorted (`HashMap`). @@ -414,7 +414,7 @@ It has precedence over reading from application.properties. Here is an example: class ApplicationConfig extends AbstractAerospikeDataConfiguration { @Override - public void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { + protected void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) { aerospikeDataSettings.setWriteSortedMaps(true); } } diff --git a/src/main/asciidoc/reference/getting-started.adoc b/src/main/asciidoc/reference/getting-started.adoc deleted file mode 100644 index e69de29bb..000000000 diff --git a/src/main/asciidoc/reference/query-methods-combined.adoc b/src/main/asciidoc/reference/query-methods-combined.adoc new file mode 100644 index 000000000..f00a8869b --- /dev/null +++ b/src/main/asciidoc/reference/query-methods-combined.adoc @@ -0,0 +1,53 @@ +[[aerospike.query-methods-combined]] += Combined Query Methods + +In Spring Data, complex query methods using `And` or `Or` conjunction allow developers to define custom database queries based on method names that combine multiple conditions. These methods leverage query derivation, enabling developers to create expressive and type-safe queries by simply defining method signatures. + +For more details, see link:https://docs.spring.io/spring-data/commons/reference/repositories/query-methods-details.html[Defining Query Methods]. + +For instance, a method like `findByFirstNameAndLastName` will fetch records matching both conditions, while `findByFirstNameOrLastName` will return records that match either condition. These query methods simplify database interaction by reducing boilerplate code and relying on convention over configuration for readability and maintainability. + +In Spring Data Aerospike you define such queries by adding query methods signatures to a `Repository`, as you would typically, wrapping each query parameter with `QueryParam.of()` method. This method is required to pass arguments to each part of a combined query, it can receive one or more objects of the same type. 
+ +This way `QueryParam` stores arguments passed to each part of a combined repository query, e.g., `repository.findByNameAndEmail(QueryParam.of("John"), QueryParam.of("email"))`. + +Here are some examples: + + +[source,java] +---- +public interface CustomerRepository extends AerospikeRepository<Customer, String> { + + // simple query + List<Customer> findByLastName(String lastName); + + // simple query + List<Customer> findByFirstName(String firstName); + + // combined query with AND conjunction + List<Customer> findByEmailAndFirstName(QueryParam email, QueryParam firstName); + + // combined query with AND conjunctions + List<Customer> findByIdAndFirstNameAndAge(QueryParam id, QueryParam firstName, QueryParam age); + + // combined query with OR conjunction + List<Customer> findByFirstNameOrAge(QueryParam firstName, QueryParam age); + + // combined query with AND and OR conjunctions + List<Customer> findByEmailAndFirstNameOrAge(QueryParam email, QueryParam firstName, QueryParam age); +} + + @Test + void findByCombinedQuery() { + QueryParam email = QueryParam.of(dave.getEmail()); + QueryParam name = QueryParam.of(carter.getFirstName()); + List<Customer> customers = repository.findByEmailAndFirstName(email, name); + assertThat(customers).isEmpty(); + + QueryParam ids = QueryParam.of(List.of(leroi.getId(), dave.getId(), carter.getId())); + QueryParam firstName = QueryParam.of(leroi.getFirstName()); + QueryParam age = QueryParam.of(leroi.getAge()); + List<Customer> customers2 = repository.findByIdAndFirstNameAndAge(ids, firstName, age); + assertThat(customers2).containsOnly(leroi); + } +---- \ No newline at end of file diff --git a/src/main/asciidoc/reference/query-methods-preface.adoc index a40203d91..cd5a069cb 100644 --- a/src/main/asciidoc/reference/query-methods-preface.adoc +++ b/src/main/asciidoc/reference/query-methods-preface.adoc @@ -1,3 +1,4 @@ +[[aerospike.query-methods-preface]] = Query Methods Spring Data Aerospike supports defining queries by method name in the Repository interface so that the implementation is generated. @@ -6,7 +7,7 @@ The format of method names is fairly flexible, comprising a verb and criteria. Some of the verbs include `find`, `query`, `read`, `get`, `count` and `delete`. For example, `findByFirstName`, `countByLastName` etc. -For more details refer to basic SpringData documentation: <>. +For more details, refer to basic Spring Data documentation: link:https://docs.spring.io/spring-data/rest/reference/data-commons/repositories/query-methods-details.html[Defining Query Methods]. == Repository Query Keywords diff --git a/src/main/asciidoc/reference/scan-operation.adoc index 1e2681621..0bf684972 100644 --- a/src/main/asciidoc/reference/scan-operation.adoc +++ b/src/main/asciidoc/reference/scan-operation.adoc @@ -13,7 +13,7 @@ xref:#configuration.scans-enabled[`scansEnabled`] parameter to `true`. [source,properties] ---- -spring-data-aerospike.scans-enabled=true +spring.data.aerospike.scans-enabled=true ---- NOTE: Once this flag is enabled, scans run whenever needed with no warnings.
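The same flag can also be set programmatically. Below is a minimal sketch, mirroring the configuration examples above (the class name is illustrative), that overrides `configureDataSettings` instead of using `application.properties`:

[source,java]
----
class ApplicationConfig extends AbstractAerospikeDataConfiguration {

    @Override
    protected void configureDataSettings(AerospikeDataSettings aerospikeDataSettings) {
        // enable queries that require a scan; scans are disabled by default
        aerospikeDataSettings.setScansEnabled(true);
    }
}
----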
diff --git a/src/main/asciidoc/reference/secondary-indexes.adoc b/src/main/asciidoc/reference/secondary-indexes.adoc index 3fb93b767..4ea354b55 100644 --- a/src/main/asciidoc/reference/secondary-indexes.adoc +++ b/src/main/asciidoc/reference/secondary-indexes.adoc @@ -12,7 +12,7 @@ Let's consider a simple query for finding by equality: [source,java] ---- -public List personRepsitory.findByLastName(lastName); +public List personRepository.findByLastName(lastName); ---- Notice that findByLastName is not a simple lookup by key, but rather finding all records in a set. diff --git a/src/main/asciidoc/reference/template.adoc b/src/main/asciidoc/reference/template.adoc index f283547c5..787d4b6f5 100644 --- a/src/main/asciidoc/reference/template.adoc +++ b/src/main/asciidoc/reference/template.adoc @@ -25,7 +25,7 @@ protected AerospikeTemplate template; An alternative is to instantiate it yourself, you can see the bean in `AbstractAerospikeDataConfiguration`. -In case if you need to use custom `WritePolicy`, the `persist` operation can be used +In case if you need to use custom `WritePolicy`, the `persist` operation can be used. For CAS updates `save` operation must be used. diff --git a/src/main/asciidoc/reference/transactions.adoc b/src/main/asciidoc/reference/transactions.adoc index e79325897..dd4b4c73a 100644 --- a/src/main/asciidoc/reference/transactions.adoc +++ b/src/main/asciidoc/reference/transactions.adoc @@ -11,6 +11,8 @@ predefined rules and constraints. transaction are not visible to others. . **Durability** guarantees that once a transaction has been committed, its changes are permanent. +For more details, see link:https://docs.spring.io/spring-framework/reference/data-access/transaction.html[Spring Transaction Management]. + == Choosing Transaction Management Model Spring offers two models of transaction management: **declarative** and **programmatic**. When choosing between them, diff --git a/src/main/asciidoc/spring-data-commons-docs/auditing.adoc b/src/main/asciidoc/spring-data-commons-docs/auditing.adoc deleted file mode 100644 index 152558d4a..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/auditing.adoc +++ /dev/null @@ -1,123 +0,0 @@ -[[auditing]] -= Auditing - -[[auditing.basics]] -== Basics -Spring Data provides sophisticated support to transparently keep track of who created or changed an entity and when the change happened.To benefit from that functionality, you have to equip your entity classes with auditing metadata that can be defined either using annotations or by implementing an interface. -Additionally, auditing has to be enabled either through Annotation configuration or XML configuration to register the required infrastructure components. -Please refer to the store-specific section for configuration samples. - -[NOTE] -==== -Applications that only track creation and modification dates are not required do make their entities implement <>. -==== - -[[auditing.annotations]] -=== Annotation-based Auditing Metadata -We provide `@CreatedBy` and `@LastModifiedBy` to capture the user who created or modified the entity as well as `@CreatedDate` and `@LastModifiedDate` to capture when the change happened. - -.An audited entity -==== -[source,java] ----- -class Customer { - - @CreatedBy - private User user; - - @CreatedDate - private Instant createdDate; - - // … further properties omitted -} ----- -==== - -As you can see, the annotations can be applied selectively, depending on which information you want to capture. 
-The annotations, indicating to capture when changes are made, can be used on properties of type JDK8 date and time types, `long`, `Long`, and legacy Java `Date` and `Calendar`. - -Auditing metadata does not necessarily need to live in the root level entity but can be added to an embedded one (depending on the actual store in use), as shown in the snippet below. - -.Audit metadata in embedded entity -==== -[source,java] ----- -class Customer { - - private AuditMetadata auditingMetadata; - - // … further properties omitted -} - -class AuditMetadata { - - @CreatedBy - private User user; - - @CreatedDate - private Instant createdDate; - -} ----- -==== - -[[auditing.interfaces]] -=== Interface-based Auditing Metadata -In case you do not want to use annotations to define auditing metadata, you can let your domain class implement the `Auditable` interface. It exposes setter methods for all of the auditing properties. - -[[auditing.auditor-aware]] -=== `AuditorAware` - -In case you use either `@CreatedBy` or `@LastModifiedBy`, the auditing infrastructure somehow needs to become aware of the current principal. To do so, we provide an `AuditorAware` SPI interface that you have to implement to tell the infrastructure who the current user or system interacting with the application is. The generic type `T` defines what type the properties annotated with `@CreatedBy` or `@LastModifiedBy` have to be. - -The following example shows an implementation of the interface that uses Spring Security's `Authentication` object: - -.Implementation of `AuditorAware` based on Spring Security -==== -[source, java] ----- -class SpringSecurityAuditorAware implements AuditorAware { - - @Override - public Optional getCurrentAuditor() { - - return Optional.ofNullable(SecurityContextHolder.getContext()) - .map(SecurityContext::getAuthentication) - .filter(Authentication::isAuthenticated) - .map(Authentication::getPrincipal) - .map(User.class::cast); - } -} ----- -==== - -The implementation accesses the `Authentication` object provided by Spring Security and looks up the custom `UserDetails` instance that you have created in your `UserDetailsService` implementation. We assume here that you are exposing the domain user through the `UserDetails` implementation but that, based on the `Authentication` found, you could also look it up from anywhere. - -[[auditing.reactive-auditor-aware]] -=== `ReactiveAuditorAware` - -When using reactive infrastructure you might want to make use of contextual information to provide `@CreatedBy` or `@LastModifiedBy` information. -We provide an `ReactiveAuditorAware` SPI interface that you have to implement to tell the infrastructure who the current user or system interacting with the application is. The generic type `T` defines what type the properties annotated with `@CreatedBy` or `@LastModifiedBy` have to be. 
- -The following example shows an implementation of the interface that uses reactive Spring Security's `Authentication` object: - -.Implementation of `ReactiveAuditorAware` based on Spring Security -==== -[source, java] ----- -class SpringSecurityAuditorAware implements ReactiveAuditorAware { - - @Override - public Mono getCurrentAuditor() { - - return ReactiveSecurityContextHolder.getContext() - .map(SecurityContext::getAuthentication) - .filter(Authentication::isAuthenticated) - .map(Authentication::getPrincipal) - .map(User.class::cast); - } -} ----- -==== - -The implementation accesses the `Authentication` object provided by Spring Security and looks up the custom `UserDetails` instance that you have created in your `UserDetailsService` implementation. We assume here that you are exposing the domain user through the `UserDetails` implementation but that, based on the `Authentication` found, you could also look it up from anywhere. diff --git a/src/main/asciidoc/spring-data-commons-docs/custom-conversions.adoc b/src/main/asciidoc/spring-data-commons-docs/custom-conversions.adoc deleted file mode 100644 index 5dac24291..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/custom-conversions.adoc +++ /dev/null @@ -1,44 +0,0 @@ -[[custom-conversions]] -= Custom Conversions - -The following example of a Spring `Converter` implementation converts from a `String` to a custom `Email` value object: - -[source,java,subs="verbatim,attributes"] ----- -@ReadingConverter -public class EmailReadConverter implements Converter { - - public Email convert(String source) { - return Email.valueOf(source); - } -} ----- - -If you write a `Converter` whose source and target type are native types, we cannot determine whether we should consider it as a reading or a writing converter. -Registering the converter instance as both might lead to unwanted results. -For example, a `Converter` is ambiguous, although it probably does not make sense to try to convert all `String` instances into `Long` instances when writing. -To let you force the infrastructure to register a converter for only one way, we provide `@ReadingConverter` and `@WritingConverter` annotations to be used in the converter implementation. - -Converters are subject to explicit registration as instances are not picked up from a classpath or container scan to avoid unwanted registration with a conversion service and the side effects resulting from such a registration. Converters are registered with `CustomConversions` as the central facility that allows registration and querying for registered converters based on source- and target type. - -`CustomConversions` ships with a pre-defined set of converter registrations: - -* JSR-310 Converters for conversion between `java.time`, `java.util.Date` and `String` types. - -NOTE: Default converters for local temporal types (e.g. `LocalDateTime` to `java.util.Date`) rely on system-default timezone settings to convert between those types. You can override the default converter, by registering your own converter. - -[[customconversions.converter-disambiguation]] -== Converter Disambiguation - -Generally, we inspect the `Converter` implementations for the source and target types they convert from and to. -Depending on whether one of those is a type the underlying data access API can handle natively, we register the converter instance as a reading or a writing converter. 
-The following examples show a writing- and a read converter (note the difference is in the order of the qualifiers on `Converter`): - -[source,java] ----- -// Write converter as only the target type is one that can be handled natively -class MyConverter implements Converter { … } - -// Read converter as only the source type is one that can be handled natively -class MyConverter implements Converter { … } ----- diff --git a/src/main/asciidoc/spring-data-commons-docs/dependencies.adoc b/src/main/asciidoc/spring-data-commons-docs/dependencies.adoc deleted file mode 100644 index 4b9004d24..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/dependencies.adoc +++ /dev/null @@ -1,62 +0,0 @@ -[[dependencies]] -= Dependencies - -:releasetrainVersion: 2023.0.2 - -Due to the different inception dates of individual Spring Data modules, most of them carry different major and minor version numbers. The easiest way to find compatible ones is to rely on the Spring Data Release Train BOM that we ship with the compatible versions defined. In a Maven project, you would declare this dependency in the `` section of your POM as follows: - -.Using the Spring Data release train BOM -==== -[source, xml, subs="+attributes"] ----- - - - - org.springframework.data - spring-data-bom - {releasetrainVersion} - import - pom - - - ----- -==== - -[[dependencies.train-names]] -[[dependencies.train-version]] -The train version uses https://calver.org/[calver] with the pattern `YYYY.MINOR.MICRO`. -The version name follows `${calver}` for GA releases and service releases and the following pattern for all other versions: `${calver}-${modifier}`, where `modifier` can be one of the following: - -* `SNAPSHOT`: Current snapshots -* `M1`, `M2`, and so on: Milestones -* `RC1`, `RC2`, and so on: Release candidates - -You can find a working example of using the BOMs in https://github.com/spring-projects/spring-data-examples/tree/main/bom[Spring Data examples repository]. With that in place, you can declare the Spring Data modules you would like to use without a version in the `` block, as follows: - -.Declaring a dependency to a Spring Data module -==== -[source, xml] ----- - - - org.springframework.data - spring-data-jpa - - ----- -==== - -[[dependencies.spring-boot]] -== Dependency Management with Spring Boot -Spring Boot selects a recent version of the Spring Data modules for you. If you still want to upgrade to a newer version, -set the `spring-data-bom.version` property to the <> -you would like to use. - -See Spring Boot's https://docs.spring.io/spring-boot/docs/current/reference/html/dependency-versions.html#appendix.dependency-versions.properties[documentation] -(search for "Spring Data Bom") for more details. - -[[dependencies.spring-framework]] -== Spring Framework - -Using the most recent version of SpringData is highly recommended. diff --git a/src/main/asciidoc/spring-data-commons-docs/object-mapping.adoc b/src/main/asciidoc/spring-data-commons-docs/object-mapping.adoc deleted file mode 100644 index 6a9ee1e23..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/object-mapping.adoc +++ /dev/null @@ -1,446 +0,0 @@ -[[mapping.fundamentals]] -= Object Mapping Fundamentals - -This section covers the fundamentals of Spring Data object mapping, object creation, field and property access, mutability and immutability. -Note, that this section only applies to Spring Data modules that do not use the object mapping of the underlying data store (like JPA). 
-Also be sure to consult the store-specific sections for store-specific object mapping, like indexes, customizing column or field names or the like. - -Core responsibility of the Spring Data object mapping is to create instances of domain objects and map the store-native data structures onto those. -This means we need two fundamental steps: - -1. Instance creation by using one of the constructors exposed. -2. Instance population to materialize all exposed properties. - -[[mapping.object-creation]] -== Object creation - -Spring Data automatically tries to detect a persistent entity's constructor to be used to materialize objects of that type. -The resolution algorithm works as follows: - -1. If there is a single static factory method annotated with `@PersistenceCreator` then it is used. -2. If there is a single constructor, it is used. -3. If there are multiple constructors and exactly one is annotated with `@PersistenceCreator`, it is used. -4. If the type is a Java `Record` the canonical constructor is used. -5. If there's a no-argument constructor, it is used. -Other constructors will be ignored. - -The value resolution assumes constructor/factory method argument names to match the property names of the entity, i.e. the resolution will be performed as if the property was to be populated, including all customizations in mapping (different datastore column or field name etc.). -This also requires either parameter names information available in the class file or an `@ConstructorProperties` annotation being present on the constructor. - -The value resolution can be customized by using Spring Framework's `@Value` value annotation using a store-specific SpEL expression. -Please consult the section on store specific mappings for further details. - -[[mapping.object-creation.details]] -.Object creation internals -**** - -To avoid the overhead of reflection, Spring Data object creation uses a factory class generated at runtime by default, which will call the domain classes constructor directly. -I.e. for this example type: - -[source,java] ----- -class Person { - Person(String firstname, String lastname) { … } -} ----- - -we will create a factory class semantically equivalent to this one at runtime: - -[source,java] ----- -class PersonObjectInstantiator implements ObjectInstantiator { - - Object newInstance(Object... args) { - return new Person((String) args[0], (String) args[1]); - } -} ----- - -This gives us a roundabout 10% performance boost over reflection. -For the domain class to be eligible for such optimization, it needs to adhere to a set of constraints: - -- it must not be a private class -- it must not be a non-static inner class -- it must not be a CGLib proxy class -- the constructor to be used by Spring Data must not be private - -If any of these criteria match, Spring Data will fall back to entity instantiation via reflection. -**** - -[[mapping.property-population]] -== Property population - -Once an instance of the entity has been created, Spring Data populates all remaining persistent properties of that class. -Unless already populated by the entity's constructor (i.e. consumed through its constructor argument list), the identifier property will be populated first to allow the resolution of cyclic object references. -After that, all non-transient properties that have not already been populated by the constructor are set on the entity instance. -For that we use the following algorithm: - -1. 
If the property is immutable but exposes a `with…` method (see below), we use the `with…` method to create a new entity instance with the new property value. -2. If property access (i.e. access through getters and setters) is defined, we're invoking the setter method. -3. If the property is mutable we set the field directly. -4. If the property is immutable we're using the constructor to be used by persistence operations (see <>) to create a copy of the instance. -5. By default, we set the field value directly. - -[[mapping.property-population.details]] -.Property population internals -**** -Similarly to our <> we also use Spring Data runtime generated accessor classes to interact with the entity instance. - -[source,java] ----- -class Person { - - private final Long id; - private String firstname; - private @AccessType(Type.PROPERTY) String lastname; - - Person() { - this.id = null; - } - - Person(Long id, String firstname, String lastname) { - // Field assignments - } - - Person withId(Long id) { - return new Person(id, this.firstname, this.lastame); - } - - void setLastname(String lastname) { - this.lastname = lastname; - } -} ----- - -.A generated Property Accessor -==== -[source,java] ----- -class PersonPropertyAccessor implements PersistentPropertyAccessor { - - private static final MethodHandle firstname; <2> - - private Person person; <1> - - public void setProperty(PersistentProperty property, Object value) { - - String name = property.getName(); - - if ("firstname".equals(name)) { - firstname.invoke(person, (String) value); <2> - } else if ("id".equals(name)) { - this.person = person.withId((Long) value); <3> - } else if ("lastname".equals(name)) { - this.person.setLastname((String) value); <4> - } - } -} ----- - -<1> PropertyAccessor's hold a mutable instance of the underlying object. -This is, to enable mutations of otherwise immutable properties. -<2> By default, Spring Data uses field-access to read and write property values. -As per visibility rules of `private` fields, `MethodHandles` are used to interact with fields. -<3> The class exposes a `withId(…)` method that's used to set the identifier, e.g. when an instance is inserted into the datastore and an identifier has been generated. -Calling `withId(…)` creates a new `Person` object. -All subsequent mutations will take place in the new instance leaving the previous untouched. -<4> Using property-access allows direct method invocations without using `MethodHandles`. -==== - -This gives us a roundabout 25% performance boost over reflection. -For the domain class to be eligible for such optimization, it needs to adhere to a set of constraints: - -- Types must not reside in the default or under the `java` package. -- Types and their constructors must be `public` -- Types that are inner classes must be `static`. -- The used Java Runtime must allow for declaring classes in the originating `ClassLoader`. -Java 9 and newer impose certain limitations. - -By default, Spring Data attempts to use generated property accessors and falls back to reflection-based ones if a limitation is detected. 
-**** - -Let's have a look at the following entity: - -.A sample entity -==== -[source,java] ----- -class Person { - - private final @Id Long id; <1> - private final String firstname, lastname; <2> - private final LocalDate birthday; - private final int age; <3> - - private String comment; <4> - private @AccessType(Type.PROPERTY) String remarks; <5> - - static Person of(String firstname, String lastname, LocalDate birthday) { <6> - - return new Person(null, firstname, lastname, birthday, - Period.between(birthday, LocalDate.now()).getYears()); - } - - Person(Long id, String firstname, String lastname, LocalDate birthday, int age) { <6> - - this.id = id; - this.firstname = firstname; - this.lastname = lastname; - this.birthday = birthday; - this.age = age; - } - - Person withId(Long id) { <1> - return new Person(id, this.firstname, this.lastname, this.birthday, this.age); - } - - void setRemarks(String remarks) { <5> - this.remarks = remarks; - } -} ----- -==== - -<1> The identifier property is final but set to `null` in the constructor. -The class exposes a `withId(…)` method that's used to set the identifier, e.g. when an instance is inserted into the datastore and an identifier has been generated. -The original `Person` instance stays unchanged as a new one is created. -The same pattern is usually applied for other properties that are store managed but might have to be changed for persistence operations. -The wither method is optional as the persistence constructor (see 6) is effectively a copy constructor and setting the property will be translated into creating a fresh instance with the new identifier value applied. -<2> The `firstname` and `lastname` properties are ordinary immutable properties potentially exposed through getters. -<3> The `age` property is an immutable but derived one from the `birthday` property. -With the design shown, the database value will trump the defaulting as Spring Data uses the only declared constructor. -Even if the intent is that the calculation should be preferred, it's important that this constructor also takes `age` as parameter (to potentially ignore it) as otherwise the property population step will attempt to set the age field and fail due to it being immutable and no `with…` method being present. -<4> The `comment` property is mutable and is populated by setting its field directly. -<5> The `remarks` property is mutable and is populated by invoking the setter method. -<6> The class exposes a factory method and a constructor for object creation. -The core idea here is to use factory methods instead of additional constructors to avoid the need for constructor disambiguation through `@PersistenceCreator`. -Instead, defaulting of properties is handled within the factory method. -If you want Spring Data to use the factory method for object instantiation, annotate it with `@PersistenceCreator`. - -[[mapping.general-recommendations]] -== General recommendations - -* _Try to stick to immutable objects_ -- Immutable objects are straightforward to create as materializing an object is then a matter of calling its constructor only. -Also, this avoids your domain objects to be littered with setter methods that allow client code to manipulate the objects state. -If you need those, prefer to make them package protected so that they can only be invoked by a limited amount of co-located types. -Constructor-only materialization is up to 30% faster than properties population. 
-* _Provide an all-args constructor_ -- Even if you cannot or don't want to model your entities as immutable values, there's still value in providing a constructor that takes all properties of the entity as arguments, including the mutable ones, as this allows the object mapping to skip the property population for optimal performance. -* _Use factory methods instead of overloaded constructors to avoid ``@PersistenceCreator``_ -- With an all-argument constructor needed for optimal performance, we usually want to expose more application use case specific constructors that omit things like auto-generated identifiers etc. -It's an established pattern to rather use static factory methods to expose these variants of the all-args constructor. -* _Make sure you adhere to the constraints that allow the generated instantiator and property accessor classes to be used_ -- -* _For identifiers to be generated, still use a final field in combination with an all-arguments persistence constructor (preferred) or a `with…` method_ -- -* _Use Lombok to avoid boilerplate code_ -- As persistence operations usually require a constructor taking all arguments, their declaration becomes a tedious repetition of boilerplate parameter to field assignments that can best be avoided by using Lombok's `@AllArgsConstructor`. - -[[mapping.general-recommendations.override.properties]] -=== Overriding Properties - -Java's allows a flexible design of domain classes where a subclass could define a property that is already declared with the same name in its superclass. -Consider the following example: - -==== -[source,java] ----- -public class SuperType { - - private CharSequence field; - - public SuperType(CharSequence field) { - this.field = field; - } - - public CharSequence getField() { - return this.field; - } - - public void setField(CharSequence field) { - this.field = field; - } -} - -public class SubType extends SuperType { - - private String field; - - public SubType(String field) { - super(field); - this.field = field; - } - - @Override - public String getField() { - return this.field; - } - - public void setField(String field) { - this.field = field; - - // optional - super.setField(field); - } -} ----- -==== - -Both classes define a `field` using assignable types. `SubType` however shadows `SuperType.field`. -Depending on the class design, using the constructor could be the only default approach to set `SuperType.field`. -Alternatively, calling `super.setField(…)` in the setter could set the `field` in `SuperType`. -All these mechanisms create conflicts to some degree because the properties share the same name yet might represent two distinct values. -Spring Data skips super-type properties if types are not assignable. -That is, the type of the overridden property must be assignable to its super-type property type to be registered as override, otherwise the super-type property is considered transient. -We generally recommend using distinct property names. - -Spring Data modules generally support overridden properties holding different values. -From a programming model perspective there are a few things to consider: - -1. Which property should be persisted (default to all declared properties)? -You can exclude properties by annotating these with `@Transient`. -2. How to represent properties in your data store? -Using the same field/column name for different values typically leads to corrupt data so you should annotate least one of the properties using an explicit field/column name. -3. 
-
-[[mapping.kotlin]]
-== Kotlin support
-
-Spring Data adapts specifics of Kotlin to allow object creation and mutation.
-
-[[mapping.kotlin.creation]]
-=== Kotlin object creation
-
-Kotlin classes are supported for instantiation; all classes are immutable by default and require explicit property declarations to define mutable properties.
-
-Spring Data automatically tries to detect a persistent entity's constructor to be used to materialize objects of that type.
-The resolution algorithm works as follows:
-
-1. If there is a constructor that is annotated with `@PersistenceCreator`, it is used.
-2. If the type is a Kotlin `data` class, the primary constructor is used.
-3. If there is a single static factory method annotated with `@PersistenceCreator`, it is used.
-4. If there is a single constructor, it is used.
-5. If there are multiple constructors and exactly one is annotated with `@PersistenceCreator`, it is used.
-6. If the type is a Java `Record`, the canonical constructor is used.
-7. If there's a no-argument constructor, it is used.
-Other constructors will be ignored.
-
-Consider the following `data` class `Person`:
-
-====
-[source,kotlin]
-----
-data class Person(val id: String, val name: String)
-----
-====
-
-The class above compiles to a typical class with an explicit constructor. We can customize this class by adding another constructor and annotating it with `@PersistenceCreator` to indicate a constructor preference:
-
-====
-[source,kotlin]
-----
-data class Person(var id: String, val name: String) {
-
-  @PersistenceCreator
-  constructor(id: String) : this(id, "unknown")
-}
-----
-====
-
-Kotlin supports parameter optionality by allowing default values to be used if a parameter is not provided.
-When Spring Data detects a constructor with parameter defaulting, it leaves these parameters absent if the data store does not provide a value (or simply returns `null`) so Kotlin can apply parameter defaulting. Consider the following class that applies parameter defaulting for `name`:
-
-====
-[source,kotlin]
-----
-data class Person(var id: String, val name: String = "unknown")
-----
-====
-
-Every time the `name` parameter is either not part of the result or its value is `null`, the `name` defaults to `unknown`.
-
-=== Property population of Kotlin data classes
-
-In Kotlin, all classes are immutable by default and require explicit property declarations to define mutable properties.
-Consider the following `data` class `Person`:
-
-====
-[source,kotlin]
-----
-data class Person(val id: String, val name: String)
-----
-====
-
-This class is effectively immutable.
-It allows creating new instances, as Kotlin generates a `copy(…)` method that creates new object instances copying all property values from the existing object and applying property values provided as arguments to the method.
-
-[[mapping.kotlin.override.properties]]
-=== Kotlin Overriding Properties
-
-Kotlin allows declaring https://kotlinlang.org/docs/inheritance.html#overriding-properties[property overrides] to alter properties in subclasses.
-
-====
-[source,kotlin]
-----
-open class SuperType(open var field: Int)
-
-class SubType(override var field: Int = 1) :
-  SuperType(field) {
-}
-----
-====
-
-Such an arrangement renders two properties with the name `field`.
-Kotlin generates property accessors (getters and setters) for each property in each class.
-Effectively, the code looks as follows:
-
-====
-[source,java]
-----
-public class SuperType {
-
-  private int field;
-
-  public SuperType(int field) {
-    this.field = field;
-  }
-
-  public int getField() {
-    return this.field;
-  }
-
-  public void setField(int field) {
-    this.field = field;
-  }
-}
-
-public final class SubType extends SuperType {
-
-  private int field;
-
-  public SubType(int field) {
-    super(field);
-    this.field = field;
-  }
-
-  public int getField() {
-    return this.field;
-  }
-
-  public void setField(int field) {
-    this.field = field;
-  }
-}
-----
-====
-
-Getters and setters on `SubType` set only `SubType.field` and not `SuperType.field`.
-In such an arrangement, using the constructor is the only default approach to set `SuperType.field`.
-Adding a method to `SubType` to set `SuperType.field` via `this.SuperType.field = …` is possible but falls outside supported conventions.
-Property overrides create conflicts to some degree because the properties share the same name yet might represent two distinct values.
-We generally recommend using distinct property names.
-
-Spring Data modules generally support overridden properties holding different values.
-From a programming model perspective, there are a few things to consider:
-
-1. Which property should be persisted (defaults to all declared properties)?
-You can exclude properties by annotating these with `@Transient`.
-2. How to represent properties in your data store?
-Using the same field/column name for different values typically leads to corrupt data, so you should annotate at least one of the properties using an explicit field/column name.
-3. `@AccessType(PROPERTY)` cannot be used, as the super-property cannot be set.
-
diff --git a/src/main/asciidoc/spring-data-commons-docs/query-by-example.adoc b/src/main/asciidoc/spring-data-commons-docs/query-by-example.adoc
deleted file mode 100644
index 192055cc6..000000000
--- a/src/main/asciidoc/spring-data-commons-docs/query-by-example.adoc
+++ /dev/null
@@ -1,218 +0,0 @@
-[[query-by-example]]
-= Query by Example
-
-[[query-by-example.introduction]]
-== Introduction
-
-This chapter provides an introduction to Query by Example and explains how to use it.
-
-Query by Example (QBE) is a user-friendly querying technique with a simple interface.
-It allows dynamic query creation and does not require you to write queries that contain field names.
-In fact, Query by Example does not require you to write queries by using store-specific query languages at all.
-
-[[query-by-example.usage]]
-== Usage
-
-The Query by Example API consists of four parts:
-
-* Probe: The actual example of a domain object with populated fields.
-* `ExampleMatcher`: The `ExampleMatcher` carries details on how to match particular fields.
-It can be reused across multiple Examples.
-* `Example`: An `Example` consists of the probe and the `ExampleMatcher`.
-It is used to create the query.
-* `FetchableFluentQuery`: A `FetchableFluentQuery` offers a fluent API that allows further customization of a query derived from an `Example`.
-Using the fluent API lets you specify ordering, projection, and result processing for your query.
-
-Query by Example is well suited for several use cases:
-
-* Querying your data store with a set of static or dynamic constraints.
-* Frequent refactoring of the domain objects without worrying about breaking existing queries.
-* Working independently from the underlying data store API.
-
-Query by Example also has several limitations:
-
-* No support for nested or grouped property constraints, such as `firstname = ?0 or (firstname = ?1 and lastname = ?2)`.
-* Only supports starts/contains/ends/regex matching for strings and exact matching for other property types.
-
-Before getting started with Query by Example, you need to have a domain object, as shown in the following example:
-
-.Sample Person object
-====
-[source,java]
-----
-public class Person {
-
-  @Id
-  private String id;
-  private String firstname;
-  private String lastname;
-  private Address address;
-
-  // … getters and setters omitted
-}
-----
-====
-
-The preceding example shows a simple domain object.
-You can use it to create an `Example`.
-By default, fields having `null` values are ignored, and strings are matched by using the store-specific defaults.
-
-NOTE: Inclusion of properties in Query by Example criteria is based on nullability.
-Properties using primitive types (`int`, `double`, …) are always included unless the `ExampleMatcher` ignores the property path.
-
-Examples can be built by either using the `of` factory method or by using an `ExampleMatcher`. `Example` is immutable.
-The following listing shows a simple Example:
-
-.Simple Example
-====
-[source,java]
-----
-Person person = new Person();                 <1>
-person.setFirstname("Dave");                  <2>
-
-Example<Person> example = Example.of(person); <3>
-----
-
-<1> Create a new instance of the domain object.
-<2> Set the properties to query.
-<3> Create the `Example`.
-====
-
-You can run the example queries by using repositories.
-To do so, let your repository interface extend `QueryByExampleExecutor`.
-The following listing shows an excerpt from the `QueryByExampleExecutor` interface:
-
-.The `QueryByExampleExecutor`
-====
-[source,java]
-----
-public interface QueryByExampleExecutor<T> {
-
-  <S extends T> S findOne(Example<S> example);
-
-  <S extends T> Iterable<S> findAll(Example<S> example);
-
-  // … more functionality omitted.
-}
-----
-====
-
-[[query-by-example.matchers]]
-== Example Matchers
-
-Examples are not limited to default settings.
-You can specify your own defaults for string matching, null handling, and property-specific settings by using the `ExampleMatcher`, as shown in the following example:
-
-.Example matcher with customized matching
-====
-[source,java]
-----
-Person person = new Person();                          <1>
-person.setFirstname("Dave");                           <2>
-
-ExampleMatcher matcher = ExampleMatcher.matching()     <3>
-  .withIgnorePaths("lastname")                         <4>
-  .withIncludeNullValues()                             <5>
-  .withStringMatcher(StringMatcher.ENDING);            <6>
-
-Example<Person> example = Example.of(person, matcher); <7>
-----
-
-<1> Create a new instance of the domain object.
-<2> Set properties.
-<3> Create an `ExampleMatcher` to expect all values to match.
-It is usable at this stage even without further configuration.
-<4> Construct a new `ExampleMatcher` to ignore the `lastname` property path.
-<5> Construct a new `ExampleMatcher` to ignore the `lastname` property path and to include null values.
-<6> Construct a new `ExampleMatcher` to ignore the `lastname` property path, to include null values, and to perform suffix string matching.
-<7> Create a new `Example` based on the domain object and the configured `ExampleMatcher`.
-====
-
-By default, the `ExampleMatcher` expects all values set on the probe to match.
-If you want to get results matching any of the predicates defined implicitly, use `ExampleMatcher.matchingAny()`.
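-
-As a minimal sketch (reusing the `Person` probe from the sample above and assuming a `setLastname` setter), `matchingAny()` is passed to `Example.of(…)` together with the probe:
-
-====
-[source,java]
-----
-Person person = new Person();
-person.setFirstname("Dave");
-person.setLastname("Matthews");
-
-// matches probes whose firstname is "Dave" OR whose lastname is "Matthews"
-Example<Person> example = Example.of(person, ExampleMatcher.matchingAny());
-----
-====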
-
-You can specify behavior for individual properties (such as "firstname" and "lastname" or, for nested properties, "address.city").
-You can tune it with matching options and case sensitivity, as shown in the following example:
-
-.Configuring matcher options
-====
-[source,java]
-----
-ExampleMatcher matcher = ExampleMatcher.matching()
-  .withMatcher("firstname", endsWith())
-  .withMatcher("lastname", startsWith().ignoreCase());
-----
-====
-
-Another way to configure matcher options is to use lambdas (introduced in Java 8).
-This approach creates a callback that asks the implementor to modify the matcher.
-You need not return the matcher, because configuration options are held within the matcher instance.
-The following example shows a matcher that uses lambdas:
-
-.Configuring matcher options with lambdas
-====
-[source,java]
-----
-ExampleMatcher matcher = ExampleMatcher.matching()
-  .withMatcher("firstname", match -> match.endsWith())
-  .withMatcher("lastname", match -> match.startsWith());
-----
-====
-
-Queries created by `Example` use a merged view of the configuration.
-Default matching settings can be set at the `ExampleMatcher` level, while individual settings can be applied to particular property paths.
-Settings that are set on `ExampleMatcher` are inherited by property path settings unless they are defined explicitly.
-Settings on a property path have higher precedence than default settings.
-The following table describes the scope of the various `ExampleMatcher` settings:
-
-[cols="1,2",options="header"]
-.Scope of `ExampleMatcher` settings
-|===
-| Setting
-| Scope
-
-| Null-handling
-| `ExampleMatcher`
-
-| String matching
-| `ExampleMatcher` and property path
-
-| Ignoring properties
-| Property path
-
-| Case sensitivity
-| `ExampleMatcher` and property path
-
-| Value transformation
-| Property path
-
-|===
-
-[[query-by-example.fluent]]
-== Fluent API
-
-`QueryByExampleExecutor` offers one more method, which we did not mention so far: `<S extends T, R> R findBy(Example<S> example, Function<FluentQuery.FetchableFluentQuery<S>, R> queryFunction)`.
-As with other methods, it executes a query derived from an `Example`.
-However, with the second argument, you can control aspects of that execution that you cannot dynamically control otherwise.
-You do so by invoking the various methods of the `FetchableFluentQuery` in the second argument.
-`sortBy` lets you specify an ordering for your result.
-`as` lets you specify the type to which you want the result to be transformed.
-`project` limits the queried attributes.
-`first`, `firstValue`, `one`, `oneValue`, `all`, `page`, `stream`, `count`, and `exists` define what kind of result you get and how the query behaves when more than the expected number of results are available.
-
-.Use the fluent API to get the last of potentially many results, ordered by lastname.
-==== -[source,java] ----- -Optional match = repository.findBy(example, - q -> q - .sortBy(Sort.by("lastname").descending()) - .first() -); ----- -==== diff --git a/src/main/asciidoc/spring-data-commons-docs/repositories-null-handling.adoc b/src/main/asciidoc/spring-data-commons-docs/repositories-null-handling.adoc deleted file mode 100644 index 43bb79b80..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/repositories-null-handling.adoc +++ /dev/null @@ -1,100 +0,0 @@ -:spring-framework-docs: https://docs.spring.io/spring-framework/docs/5.3.x/reference/html -:spring-framework-javadoc: https://docs.spring.io/spring-framework/docs/current/javadoc-api - -[[repositories.nullability]] -=== Null Handling of Repository Methods - -As of Spring Data 2.0, repository CRUD methods that return an individual aggregate instance use Java 8's `Optional` to indicate the potential absence of a value. -Besides that, Spring Data supports returning the following wrapper types on query methods: - -* `com.google.common.base.Optional` -* `scala.Option` -* `io.vavr.control.Option` - -Alternatively, query methods can choose not to use a wrapper type at all. -The absence of a query result is then indicated by returning `null`. -Repository methods returning collections, collection alternatives, wrappers, and streams are guaranteed never to return `null` but rather the corresponding empty representation. -See "`<>`" for details. - -[[repositories.nullability.annotations]] -==== Nullability Annotations - -You can express nullability constraints for repository methods by using {spring-framework-docs}/core.html#null-safety[Spring Framework's nullability annotations]. -They provide a tooling-friendly approach and opt-in `null` checks during runtime, as follows: - -* {spring-framework-javadoc}/org/springframework/lang/NonNullApi.html[`@NonNullApi`]: Used on the package level to declare that the default behavior for parameters and return values is, respectively, neither to accept nor to produce `null` values. -* {spring-framework-javadoc}/org/springframework/lang/NonNull.html[`@NonNull`]: Used on a parameter or return value that must not be `null` (not needed on a parameter and return value where `@NonNullApi` applies). -* {spring-framework-javadoc}/org/springframework/lang/Nullable.html[`@Nullable`]: Used on a parameter or return value that can be `null`. - -Spring annotations are meta-annotated with https://jcp.org/en/jsr/detail?id=305[JSR 305] annotations (a dormant but widely used JSR). -JSR 305 meta-annotations let tooling vendors (such as https://www.jetbrains.com/help/idea/nullable-and-notnull-annotations.html[IDEA], https://help.eclipse.org/latest/index.jsp?topic=/org.eclipse.jdt.doc.user/tasks/task-using_external_null_annotations.htm[Eclipse], and link:https://kotlinlang.org/docs/reference/java-interop.html#null-safety-and-platform-types[Kotlin]) provide null-safety support in a generic way, without having to hard-code support for Spring annotations. -To enable runtime checking of nullability constraints for query methods, you need to activate non-nullability on the package level by using Spring’s `@NonNullApi` in `package-info.java`, as shown in the following example: - -.Declaring Non-nullability in `package-info.java` -==== -[source,java] ----- -@org.springframework.lang.NonNullApi -package com.acme; ----- -==== - -Once non-null defaulting is in place, repository query method invocations get validated at runtime for nullability constraints. 
-If a query result violates the defined constraint, an exception is thrown. -This happens when the method would return `null` but is declared as non-nullable (the default with the annotation defined on the package in which the repository resides). -If you want to opt-in to nullable results again, selectively use `@Nullable` on individual methods. -Using the result wrapper types mentioned at the start of this section continues to work as expected: an empty result is translated into the value that represents absence. - -The following example shows a number of the techniques just described: - -.Using different nullability constraints -==== -[source,java] ----- -package com.acme; <1> - -import org.springframework.lang.Nullable; - -interface UserRepository extends Repository { - - User getByEmailAddress(EmailAddress emailAddress); <2> - - @Nullable - User findByEmailAddress(@Nullable EmailAddress emailAdress); <3> - - Optional findOptionalByEmailAddress(EmailAddress emailAddress); <4> -} ----- -<1> The repository resides in a package (or sub-package) for which we have defined non-null behavior. -<2> Throws an `EmptyResultDataAccessException` when the query does not produce a result. -Throws an `IllegalArgumentException` when the `emailAddress` handed to the method is `null`. -<3> Returns `null` when the query does not produce a result. -Also accepts `null` as the value for `emailAddress`. -<4> Returns `Optional.empty()` when the query does not produce a result. -Throws an `IllegalArgumentException` when the `emailAddress` handed to the method is `null`. -==== - -[[repositories.nullability.kotlin]] -==== Nullability in Kotlin-based Repositories - -Kotlin has the definition of https://kotlinlang.org/docs/reference/null-safety.html[nullability constraints] baked into the language. -Kotlin code compiles to bytecode, which does not express nullability constraints through method signatures but rather through compiled-in metadata. -Make sure to include the `kotlin-reflect` JAR in your project to enable introspection of Kotlin's nullability constraints. -Spring Data repositories use the language mechanism to define those constraints to apply the same runtime checks, as follows: - -.Using nullability constraints on Kotlin repositories -==== -[source,kotlin] ----- -interface UserRepository : Repository { - - fun findByUsername(username: String): User <1> - - fun findByFirstname(firstname: String?): User? <2> -} ----- -<1> The method defines both the parameter and the result as non-nullable (the Kotlin default). -The Kotlin compiler rejects method invocations that pass `null` to the method. -If the query yields an empty result, an `EmptyResultDataAccessException` is thrown. -<2> This method accepts `null` for the `firstname` parameter and returns `null` if the query does not produce a result. -==== diff --git a/src/main/asciidoc/spring-data-commons-docs/repositories-paging-sorting.adoc b/src/main/asciidoc/spring-data-commons-docs/repositories-paging-sorting.adoc deleted file mode 100644 index 58b94fe86..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/repositories-paging-sorting.adoc +++ /dev/null @@ -1,206 +0,0 @@ -[[repositories.special-parameters]] -=== Paging, Iterating Large Results, Sorting - -To handle parameters in your query, define method parameters as already seen in the preceding examples. -Besides that, the infrastructure recognizes certain specific types like `Pageable` and `Sort`, to apply pagination and sorting to your queries dynamically. 
-The following example demonstrates these features: - -ifdef::feature-scroll[] -.Using `Pageable`, `Slice`, `ScrollPosition`, and `Sort` in query methods -==== -[source,java] ----- -Page findByLastname(String lastname, Pageable pageable); - -Slice findByLastname(String lastname, Pageable pageable); - -Window findTop10ByLastname(String lastname, ScrollPosition position, Sort sort); - -List findByLastname(String lastname, Sort sort); - -List findByLastname(String lastname, Pageable pageable); ----- -==== -endif::[] - -ifndef::feature-scroll[] -.Using `Pageable`, `Slice`, and `Sort` in query methods -==== -[source,java] ----- -Page findByLastname(String lastname, Pageable pageable); - -Slice findByLastname(String lastname, Pageable pageable); - -List findByLastname(String lastname, Sort sort); - -List findByLastname(String lastname, Pageable pageable); ----- -==== -endif::[] - -IMPORTANT: APIs taking `Sort` and `Pageable` expect non-`null` values to be handed into methods. -If you do not want to apply any sorting or pagination, use `Sort.unsorted()` and `Pageable.unpaged()`. - -The first method lets you pass an `org.springframework.data.domain.Pageable` instance to the query method to dynamically add paging to your statically defined query. -A `Page` knows about the total number of elements and pages available. -It does so by the infrastructure triggering a count query to calculate the overall number. -As this might be expensive (depending on the store used), you can instead return a `Slice`. -A `Slice` knows only about whether a next `Slice` is available, which might be sufficient when walking through a larger result set. - -Sorting options are handled through the `Pageable` instance, too. -If you need only sorting, add an `org.springframework.data.domain.Sort` parameter to your method. -As you can see, returning a `List` is also possible. -In this case, the additional metadata required to build the actual `Page` instance is not created (which, in turn, means that the additional count query that would have been necessary is not issued). -Rather, it restricts the query to look up only the given range of entities. - -NOTE: To find out how many pages you get for an entire query, you have to trigger an additional count query. -By default, this query is derived from the query you actually trigger. - -[[repositories.scrolling.guidance]] -==== Which Method is Appropriate? - -The value provided by the Spring Data abstractions is perhaps best shown by the possible query method return types outlined in the following table below. -The table shows which types you can return from a query method - -.Consuming Large Query Results -[cols="1,2,2,3"] -|=== -| Method|Amount of Data Fetched|Query Structure|Constraints - -| <`>> -| All results. -| Single query. -| Query results can exhaust all memory. Fetching all data can be time-intensive. - -| <`>> -| All results. -| Single query. -| Query results can exhaust all memory. Fetching all data can be time-intensive. - -| <`>> -| Chunked (one-by-one or in batches) depending on `Stream` consumption. -| Single query using typically cursors. -| Streams must be closed after usage to avoid resource leaks. - -| `Flux` -| Chunked (one-by-one or in batches) depending on `Flux` consumption. -| Single query using typically cursors. -| Store module must provide reactive infrastructure. - -| `Slice` -| `Pageable.getPageSize() + 1` at `Pageable.getOffset()` -| One to many queries fetching data starting at `Pageable.getOffset()` applying limiting. 
-a| A `Slice` can only navigate to the next `Slice`. - -* `Slice` provides details whether there is more data to fetch. -* Offset-based queries becomes inefficient when the offset is too large because the database still has to materialize the full result. - -ifdef::feature-scroll[] -| Offset-based `Window` -| `limit + 1` at `OffsetScrollPosition.getOffset()` -| One to many queries fetching data starting at `OffsetScrollPosition.getOffset()` applying limiting. -a| A `Window` can only navigate to the next `Window`. -endif::[] - -* `Window` provides details whether there is more data to fetch. -* Offset-based queries becomes inefficient when the offset is too large because the database still has to materialize the full result. - -| `Page` -| `Pageable.getPageSize()` at `Pageable.getOffset()` -| One to many queries starting at `Pageable.getOffset()` applying limiting. Additionally, `COUNT(…)` query to determine the total number of elements can be required. -a| Often times, `COUNT(…)` queries are required that are costly. - -* Offset-based queries becomes inefficient when the offset is too large because the database still has to materialize the full result. - -ifdef::feature-scroll[] -| Keyset-based `Window` -| `limit + 1` using a rewritten `WHERE` condition -| One to many queries fetching data starting at `KeysetScrollPosition.getKeys()` applying limiting. -a| A `Window` can only navigate to the next `Window`. - -* `Window` provides details whether there is more data to fetch. -* Keyset-based queries require a proper index structure for efficient querying. -* Most data stores do not work well when Keyset-based query results contain `null` values. -* Results must expose all sorting keys in their results requiring projections to select potentially more properties than required for the actual projection. -endif::[] - -|=== - -[[repositories.paging-and-sorting]] -==== Paging and Sorting - -You can define simple sorting expressions by using property names. -You can concatenate expressions to collect multiple criteria into one expression. - -.Defining sort expressions -==== -[source,java] ----- -Sort sort = Sort.by("firstname").ascending() - .and(Sort.by("lastname").descending()); ----- -==== - -For a more type-safe way to define sort expressions, start with the type for which to define the sort expression and use method references to define the properties on which to sort. - -.Defining sort expressions by using the type-safe API -==== -[source,java] ----- -TypedSort person = Sort.sort(Person.class); - -Sort sort = person.by(Person::getFirstname).ascending() - .and(person.by(Person::getLastname).descending()); ----- -==== - -NOTE: `TypedSort.by(…)` makes use of runtime proxies by (typically) using CGlib, which may interfere with native image compilation when using tools such as Graal VM Native. - -If your store implementation supports Querydsl, you can also use the generated metamodel types to define sort expressions: - -.Defining sort expressions by using the Querydsl API -==== -[source,java] ----- -QSort sort = QSort.by(QPerson.firstname.asc()) - .and(QSort.by(QPerson.lastname.desc())); ----- -==== - -ifdef::feature-scroll[] -include::repositories-scrolling.adoc[] -endif::[] - -[[repositories.limit-query-result]] -=== Limiting Query Results - -You can limit the results of query methods by using the `first` or `top` keywords, which you can use interchangeably. -You can append an optional numeric value to `top` or `first` to specify the maximum result size to be returned. 
-If the number is left out, a result size of 1 is assumed. -The following example shows how to limit the query size: - -.Limiting the result size of a query with `Top` and `First` -==== -[source,java] ----- -User findFirstByOrderByLastnameAsc(); - -User findTopByOrderByAgeDesc(); - -Page queryFirst10ByLastname(String lastname, Pageable pageable); - -Slice findTop3ByLastname(String lastname, Pageable pageable); - -List findFirst10ByLastname(String lastname, Sort sort); - -List findTop10ByLastname(String lastname, Pageable pageable); ----- -==== - -The limiting expressions also support the `Distinct` keyword for datastores that support distinct queries. -Also, for the queries that limit the result set to one instance, wrapping the result into with the `Optional` keyword is supported. - -If pagination or slicing is applied to a limiting query pagination (and the calculation of the number of available pages), it is applied within the limited result. - -NOTE: Limiting the results in combination with dynamic sorting by using a `Sort` parameter lets you express query methods for the 'K' smallest as well as for the 'K' biggest elements. diff --git a/src/main/asciidoc/spring-data-commons-docs/repositories.adoc b/src/main/asciidoc/spring-data-commons-docs/repositories.adoc deleted file mode 100644 index ecb7602f9..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/repositories.adoc +++ /dev/null @@ -1,1644 +0,0 @@ -:spring-framework-docs: https://docs.spring.io/spring-framework/docs/5.3.x/reference/html -:spring-framework-javadoc: https://docs.spring.io/spring-framework/docs/current/javadoc-api - -ifndef::store[] -:store: Jpa -endif::[] - -[[repositories]] -= Working with Spring Data Repositories - -The goal of the Spring Data repository abstraction is to significantly reduce the amount of boilerplate code required to implement data access layers for various persistence stores. - -[IMPORTANT] -==== -This chapter explains the core concepts and interfaces of Spring Data repositories. -The information in this chapter is pulled from the Spring Data Commons module. -It uses the configuration and code samples for the Jakarta Persistence API (JPA) module. -ifeval::[{include-xml-namespaces} != false] -If you want to use XML configuration you should adapt the XML namespace declaration and the types to be extended to the equivalents of the particular module that you use. "`<>`" covers XML configuration, which is supported across all Spring Data modules that support the repository API. -endif::[] -"`<>`" covers the query method keywords supported by the repository abstraction in general. -For detailed information on the specific features of your module, see the chapter on that module of this document. -==== - -[[repositories.core-concepts]] -== Core concepts - -The central interface in the Spring Data repository abstraction is `Repository`. -It takes the domain class to manage as well as the identifier type of the domain class as type arguments. -This interface acts primarily as a marker interface to capture the types to work with and to help you to discover interfaces that extend this one. -The https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/repository/CrudRepository.html[`CrudRepository`] and https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/repository/ListCrudRepository.html[`ListCrudRepository`] interfaces provide sophisticated CRUD functionality for the entity class that is being managed. 
- -NOTE: "Entity" or "document" are the terms often used in Spring Data Aerospike interchangeably to describe a domain object - a Java object that identifies Aerospike DB record. - -[[repositories.repository]] -.`CrudRepository` Interface -==== -[source,java] ----- -public interface CrudRepository extends Repository { - - S save(S document); <1> - - Optional findById(ID primaryKey); <2> - - Iterable findAll(); <3> - - long count(); <4> - - void delete(T entity); <5> - - boolean existsById(ID primaryKey); <6> - - // … more functionality omitted. -} ----- -<1> Saves the given document of type `T`. -<2> Returns the DB record identified by the given ID, the record is mapped to a document of type `T`. -<3> Returns all DB records from the associated set mapped to documents of type `T`. -<4> Returns the number of all DB records in the associated set. -<5> Deletes a particular DB record using a document to identify it. -<6> Indicates whether a DB record with the given ID exists. -==== - -The methods declared in this interface are commonly referred to as CRUD methods. -`ListCrudRepository` offers equivalent methods, but they return `List` where the `CrudRepository` methods return an `Iterable`. - -NOTE: We also provide persistence technology-specific abstractions, such as `JpaRepository` or `MongoRepository`. -Those interfaces extend `CrudRepository` and expose the capabilities of the underlying persistence technology in addition to the rather generic persistence technology-agnostic interfaces such as `CrudRepository`. - -Additional to the `CrudRepository`, there is a https://docs.spring.io/spring-data/commons/docs/current/api/org/springframework/data/repository/PagingAndSortingRepository.html[`PagingAndSortingRepository`] abstraction that adds additional methods to ease paginated access to entities: - -.`PagingAndSortingRepository` interface -==== -[source,java] ----- -public interface PagingAndSortingRepository { - - Iterable findAll(Sort sort); - - Page findAll(Pageable pageable); -} ----- -==== - -To access the second page of `User` by a page size of 20, you could do something like the following: - -==== -[source,java] ----- -PagingAndSortingRepository repository = // … get access to a bean -Page users = repository.findAll(PageRequest.of(1, 20)); ----- -==== - -ifdef::feature-scroll[] -In addition to pagination, scrolling provides a more fine-grained access to iterate through chunks of larger result sets. -endif::[] - -In addition to query methods, query derivation for both count and delete queries is available. -The following list shows the interface definition for a derived count query: - -.Derived Count Query -==== -[source,java] ----- -interface UserRepository extends CrudRepository { - - long countByLastname(String lastname); -} ----- -==== - -The following listing shows the interface definition for a derived delete query: - -.Derived Delete Query -==== -[source,java] ----- -interface UserRepository extends CrudRepository { - - long deleteByLastname(String lastname); - - List removeByLastname(String lastname); -} ----- -==== - -[[repositories.query-methods]] -== Query Methods - -Standard CRUD functionality repositories usually have queries on the underlying datastore. -With Spring Data, declaring those queries becomes a four-step process: - -. 
Declare an interface extending Repository or one of its subinterfaces and type it to the domain class and ID type that it should handle, as shown in the following example: -+ -==== -[source,java] ----- -interface PersonRepository extends Repository { … } ----- -==== - -. Declare query methods on the interface. -+ -==== -[source,java] ----- -interface PersonRepository extends Repository { - List findByLastname(String lastname); -} ----- -==== - -. Set up Spring to create proxy instances for those interfaces, either with <> or with <>. -+ -==== -.Java -[source,java,subs="attributes,specialchars",role="primary"] ----- -import org.springframework.data.….repository.config.Enable{store}Repositories; - -@Enable{store}Repositories -class Config { … } ----- - -ifeval::[{include-xml-namespaces} != false] -.XML -[source,xml,role="secondary"] ----- - - - - - - ----- -endif::[] -==== -+ -ifeval::[{include-xml-namespaces} != false] -The JPA namespace is used in this example. -If you use the repository abstraction for any other store, you need to change this to the appropriate namespace declaration of your store module. -In other words, you should exchange `jpa` in favor of, for example, `mongodb`. -endif::[] -+ -Note that the JavaConfig variant does not configure a package explicitly, because the package of the annotated class is used by default. -To customize the package to scan, use one of the `basePackage…` attributes of the data-store-specific repository's `@Enable{store}Repositories`-annotation. -. Inject the repository instance and use it, as shown in the following example: -+ -==== -[source,java] ----- -class SomeClient { - - private final PersonRepository repository; - - SomeClient(PersonRepository repository) { - this.repository = repository; - } - - void doSomething() { - List persons = repository.findByLastname("Matthews"); - } -} ----- -==== - -The sections that follow explain each step in detail: - -* <> -* <> -* <> -* <> - -[[repositories.definition]] -== Defining Repository Interfaces - -To define a repository interface, you first need to define a domain class-specific repository interface. -The interface must extend `Repository` and be typed to the domain class and an ID type. -If you want to expose CRUD methods for that domain type, you may extend `CrudRepository`, or one of its variants instead of `Repository`. - -[[repositories.definition-tuning]] -== Fine-tuning Repository Definition - -There are a few variants how you can get started with your repository interface. - -The typical approach is to extend `CrudRepository`, which gives you methods for CRUD functionality. -CRUD stands for Create, Read, Update, Delete. -With version 3.0 we also introduced `ListCrudRepository` which is very similar to the `CrudRepository` but for those methods that return multiple entities it returns a `List` instead of an `Iterable` which you might find easier to use. - -If you are using a reactive store you might choose `ReactiveCrudRepository`, or `RxJava3CrudRepository` depending on which reactive framework you are using. - -If you are using Kotlin you might pick `CoroutineCrudRepository` which utilizes Kotlin's coroutines. - -Additional you can extend `PagingAndSortingRepository`, `ReactiveSortingRepository`, `RxJava3SortingRepository`, or `CoroutineSortingRepository` if you need methods that allow to specify a `Sort` abstraction or in the first case a `Pageable` abstraction. 
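-
-For illustration, a sketch of such a declaration (assuming a `User` domain type with `Long` identifiers; it extends both the CRUD and the sorting interface, as explained in the note below):
-
-====
-[source,java]
-----
-interface UserRepository extends CrudRepository<User, Long>, PagingAndSortingRepository<User, Long> {
-
-  Page<User> findByLastname(String lastname, Pageable pageable);
-}
-----
-====
-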
-Note that the various sorting repositories no longer extend their respective CRUD repository as they did in Spring Data versions before 3.0.
-Therefore, you need to extend both interfaces if you want functionality of both.
-
-If you do not want to extend Spring Data interfaces, you can also annotate your repository interface with `@RepositoryDefinition`.
-Extending one of the CRUD repository interfaces exposes a complete set of methods to manipulate your entities.
-If you prefer to be selective about the methods being exposed, copy the methods you want to expose from the CRUD repository into your domain repository.
-When doing so, you may change the return type of methods.
-Spring Data will honor the return type if possible.
-For example, for methods returning multiple entities you may choose `Iterable`, `List`, `Collection`, or a Vavr list.
-
-If many repositories in your application should have the same set of methods, you can define your own base interface to inherit from.
-Such an interface must be annotated with `@NoRepositoryBean`.
-This prevents Spring Data from trying to create an instance of it directly and failing because it cannot determine the entity for that repository, since it still contains a generic type variable.
-
-The following example shows how to selectively expose CRUD methods (`findById` and `save`, in this case):
-
-.Selectively exposing CRUD methods
-====
-[source,java]
-----
-@NoRepositoryBean
-interface MyBaseRepository<T, ID> extends Repository<T, ID> {
-
-  Optional<T> findById(ID id);
-
-  <S extends T> S save(S entity);
-}
-
-interface UserRepository extends MyBaseRepository<User, Long> {
-  User findByEmailAddress(EmailAddress emailAddress);
-}
-----
-====
-
-In the prior example, you defined a common base interface for all your domain repositories and exposed `findById(…)` as well as `save(…)`. These methods are routed into the base repository implementation of the store of your choice provided by Spring Data (for example, if you use JPA, the implementation is `SimpleJpaRepository`), because they match the method signatures in `CrudRepository`.
-So the `UserRepository` can now save users, find individual users by ID, and trigger a query to find `Users` by email address.
-
-NOTE: The intermediate repository interface is annotated with `@NoRepositoryBean`.
-Make sure you add that annotation to all repository interfaces for which Spring Data should not create instances at runtime.
-
-[[repositories.multiple-modules]]
-== Using Repositories with Multiple Spring Data Modules
-
-Using a unique Spring Data module in your application makes things simple, because all repository interfaces in the defined scope are bound to the Spring Data module.
-Sometimes, applications require using more than one Spring Data module.
-In such cases, a repository definition must distinguish between persistence technologies.
-When it detects multiple repository factories on the class path, Spring Data enters strict repository configuration mode.
-Strict configuration uses details on the repository or the domain class to decide about Spring Data module binding for a repository definition:
-
-. If the repository definition extends the module-specific repository interface, it is a valid candidate for the particular Spring Data module.
-. If the domain class is annotated with the module-specific type annotation, it is a valid candidate for the particular Spring Data module.
-Spring Data modules accept either third-party annotations (such as JPA's `@Entity`) or provide their own annotations (such as `@Document` for Spring Data MongoDB and Spring Data Elasticsearch).
- -The following example shows a repository that uses module-specific interfaces (JPA in this case): - -[[repositories.multiple-modules.types]] -.Repository definitions using module-specific interfaces -==== -[source,java] ----- -interface MyRepository extends JpaRepository { } - -@NoRepositoryBean -interface MyBaseRepository extends JpaRepository { … } - -interface UserRepository extends MyBaseRepository { … } ----- - -`MyRepository` and `UserRepository` extend `JpaRepository` in their type hierarchy. -They are valid candidates for the Spring Data JPA module. -==== - -The following example shows a repository that uses generic interfaces: - -.Repository definitions using generic interfaces -==== -[source,java] ----- -interface AmbiguousRepository extends Repository { … } - -@NoRepositoryBean -interface MyBaseRepository extends CrudRepository { … } - -interface AmbiguousUserRepository extends MyBaseRepository { … } ----- - -`AmbiguousRepository` and `AmbiguousUserRepository` extend only `Repository` and `CrudRepository` in their type hierarchy. -While this is fine when using a unique Spring Data module, multiple modules cannot distinguish to which particular Spring Data these repositories should be bound. -==== - -The following example shows a repository that uses domain classes with annotations: - -[[repositories.multiple-modules.annotations]] -.Repository definitions using domain classes with annotations -==== -[source,java] ----- -interface PersonRepository extends Repository { … } - -@Entity -class Person { … } - -interface UserRepository extends Repository { … } - -@Document -class User { … } ----- - -`PersonRepository` references `Person`, which is annotated with the JPA `@Entity` annotation, so this repository clearly belongs to Spring Data JPA. `UserRepository` references `User`, which is annotated with Spring Data MongoDB's `@Document` annotation. -==== - -The following bad example shows a repository that uses domain classes with mixed annotations: - -.Repository definitions using domain classes with mixed annotations -==== -[source,java] ----- -interface JpaPersonRepository extends Repository { … } - -interface MongoDBPersonRepository extends Repository { … } - -@Entity -@Document -class Person { … } ----- - -This example shows a domain class using both JPA and Spring Data MongoDB annotations. -It defines two repositories, `JpaPersonRepository` and `MongoDBPersonRepository`. -One is intended for JPA and the other for MongoDB usage. -Spring Data is no longer able to tell the repositories apart, which leads to undefined behavior. -==== - -<> and <> are used for strict repository configuration to identify repository candidates for a particular Spring Data module. -Using multiple persistence technology-specific annotations on the same domain type is possible and enables reuse of domain types across multiple persistence technologies. -However, Spring Data can then no longer determine a unique module with which to bind the repository. - -The last way to distinguish repositories is by scoping repository base packages. -Base packages define the starting points for scanning for repository interface definitions, which implies having repository definitions located in the appropriate packages. -By default, annotation-driven configuration uses the package of the configuration class. -The <> is mandatory. 
- -The following example shows annotation-driven configuration of base packages: - -.Annotation-driven configuration of base packages -==== -[source,java] ----- -@EnableJpaRepositories(basePackages = "com.acme.repositories.jpa") -@EnableMongoRepositories(basePackages = "com.acme.repositories.mongo") -class Configuration { … } ----- -==== - -[[repositories.query-methods.details]] -== Defining Query Methods - -The repository proxy has two ways to derive a store-specific query from the method name: - -* By deriving the query from the method name directly. -* By using a manually defined query. - -Available options depend on the actual store. -However, there must be a strategy that decides what actual query is created. -The next section describes the available options. - -[[repositories.query-methods.query-lookup-strategies]] -== Query Lookup Strategies - -The following strategies are available for the repository infrastructure to resolve the query. -ifeval::[{include-xml-namespaces} != false] -With XML configuration, you can configure the strategy at the namespace through the `query-lookup-strategy` attribute. -endif::[] -For Java configuration, you can use the `queryLookupStrategy` attribute of the `Enable{store}Repositories` annotation. -Some strategies may not be supported for particular datastores. - -- `CREATE` attempts to construct a store-specific query from the query method name. -The general approach is to remove a given set of well known prefixes from the method name and parse the rest of the method. -You can read more about query construction in "`<>`". - -- `USE_DECLARED_QUERY` tries to find a declared query and throws an exception if it cannot find one. -The query can be defined by an annotation somewhere or declared by other means. -See the documentation of the specific store to find available options for that store. -If the repository infrastructure does not find a declared query for the method at bootstrap time, it fails. - -- `CREATE_IF_NOT_FOUND` (the default) combines `CREATE` and `USE_DECLARED_QUERY`. -It looks up a declared query first, and, if no declared query is found, it creates a custom method name-based query. -This is the default lookup strategy and, thus, is used if you do not configure anything explicitly. -It allows quick query definition by method names but also custom-tuning of these queries by introducing declared queries as needed. - -[[repositories.query-methods.query-creation]] -== Query Creation - -The query builder mechanism built into the Spring Data repository infrastructure is useful for building constraining queries over entities of the repository. 
- -The following example shows how to create a number of queries: - -.Query creation from method names -==== -[source,java] ----- -interface PersonRepository extends Repository { - - List findByEmailAddressAndLastname(EmailAddress emailAddress, String lastname); - - // Enables the distinct flag for the query - List findDistinctPeopleByLastnameOrFirstname(String lastname, String firstname); - List findPeopleDistinctByLastnameOrFirstname(String lastname, String firstname); - - // Enabling ignoring case for an individual property - List findByLastnameIgnoreCase(String lastname); - // Enabling ignoring case for all suitable properties - List findByLastnameAndFirstnameAllIgnoreCase(String lastname, String firstname); - - // Enabling static ORDER BY for a query - List findByLastnameOrderByFirstnameAsc(String lastname); - List findByLastnameOrderByFirstnameDesc(String lastname); -} ----- -==== - -Parsing query method names is divided into subject and predicate. -The first part (`find…By`, `exists…By`) defines the subject of the query, the second part forms the predicate. -The introducing clause (subject) can contain further expressions. -Any text between `find` (or other introducing keywords) and `By` is considered to be descriptive unless using one of the result-limiting keywords such as a `Distinct` to set a distinct flag on the query to be created or <>. - -The appendix contains the <> and <>. -However, the first `By` acts as a delimiter to indicate the start of the actual criteria predicate. -At a very basic level, you can define conditions on entity properties and concatenate them with `And` and `Or`. - -The actual result of parsing the method depends on the persistence store for which you create the query. -However, there are some general things to notice: - -- The expressions are usually property traversals combined with operators that can be concatenated. -You can combine property expressions with `AND` and `OR`. -You also get support for operators such as `Between`, `LessThan`, `GreaterThan`, and `Like` for the property expressions. -The supported operators can vary by datastore, so consult the appropriate part of your reference documentation. - -- The method parser supports setting an `IgnoreCase` flag for individual properties (for example, `findByLastnameIgnoreCase(…)`) or for all properties of a type that supports ignoring case (usually `String` instances -- for example, `findByLastnameAndFirstnameAllIgnoreCase(…)`). -Whether ignoring cases is supported may vary by store, so consult the relevant sections in the reference documentation for the store-specific query method. - -- You can apply static ordering by appending an `OrderBy` clause to the query method that references a property and by providing a sorting direction (`Asc` or `Desc`). -To create a query method that supports dynamic sorting, see "`<>`". - -[[repositories.query-methods.query-property-expressions]] -== Property Expressions - -Property expressions can refer only to a direct property of the managed entity, as shown in the preceding example. -At query creation time, you already make sure that the parsed property is a property of the managed domain class. -However, you can also define constraints by traversing nested properties. -Consider the following method signature: - -==== -[source,java] ----- -List findByAddressZipCode(ZipCode zipCode); ----- -==== - -Assume a `Person` has an `Address` with a `ZipCode`. -In that case, the method creates the `x.address.zipCode` property traversal. 
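-
-A sketch of the nested domain types such a traversal assumes (field names are illustrative):
-
-====
-[source,java]
-----
-class Person {
-  private Address address;
-}
-
-class Address {
-  private ZipCode zipCode;
-}
-----
-====
-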
-The resolution algorithm starts by interpreting the entire part (`AddressZipCode`) as the property and checks the domain class for a property with that name (uncapitalized). -If the algorithm succeeds, it uses that property. -If not, the algorithm splits up the source at the camel-case parts from the right side into a head and a tail and tries to find the corresponding property -- in our example, `AddressZip` and `Code`. -If the algorithm finds a property with that head, it takes the tail and continues building the tree down from there, splitting the tail up in the way just described. -If the first split does not match, the algorithm moves the split point to the left (`Address`, `ZipCode`) and continues. - -Although this should work for most cases, it is possible for the algorithm to select the wrong property. -Suppose the `Person` class has an `addressZip` property as well. -The algorithm would match in the first split round already, choose the wrong property, and fail (as the type of `addressZip` probably has no `code` property). - -To resolve this ambiguity you can use `_` inside your method name to manually define traversal points. -So our method name would be as follows: - -==== -[source,java] ----- -List findByAddress_ZipCode(ZipCode zipCode); ----- -==== - -Because we treat the underscore character as a reserved character, we strongly advise following standard Java naming conventions (that is, not using underscores in property names but using camel case instead). - -include::repositories-paging-sorting.adoc[] - -[[repositories.collections-and-iterables]] -== Repository Methods Returning Collections or Iterables - -Query methods that return multiple results can use standard Java `Iterable`, `List`, and `Set`. -Beyond that, we support returning Spring Data's `Streamable`, a custom extension of `Iterable`, as well as collection types provided by https://www.vavr.io/[Vavr]. -Refer to the appendix explaining all possible <>. - -[[repositories.collections-and-iterables.streamable]] -=== Using Streamable as Query Method Return Type - -You can use `Streamable` as alternative to `Iterable` or any collection type. -It provides convenience methods to access a non-parallel `Stream` (missing from `Iterable`) and the ability to directly `….filter(…)` and `….map(…)` over the elements and concatenate the `Streamable` to others: - -.Using Streamable to combine query method results -==== -[source,java] ----- -interface PersonRepository extends Repository { - Streamable findByFirstnameContaining(String firstname); - Streamable findByLastnameContaining(String lastname); -} - -Streamable result = repository.findByFirstnameContaining("av") - .and(repository.findByLastnameContaining("ea")); ----- -==== - -[[repositories.collections-and-iterables.streamable-wrapper]] -=== Returning Custom Streamable Wrapper Types - -Providing dedicated wrapper types for collections is a commonly used pattern to provide an API for a query result that returns multiple elements. -Usually, these types are used by invoking a repository method returning a collection-like type and creating an instance of the wrapper type manually. -You can avoid that additional step as Spring Data lets you use these wrapper types as query method return types if they meet the following criteria: - -. The type implements `Streamable`. -. The type exposes either a constructor or a static factory method named `of(…)` or `valueOf(…)` that takes `Streamable` as an argument. 
- -The following listing shows an example: - -==== -[source,java] ----- -class Product { <1> - MonetaryAmount getPrice() { … } -} - -@RequiredArgsConstructor(staticName = "of") -class Products implements Streamable { <2> - - private final Streamable streamable; - - public MonetaryAmount getTotal() { <3> - return streamable.stream() - .map(Priced::getPrice) - .reduce(Money.of(0), MonetaryAmount::add); - } - - - @Override - public Iterator iterator() { <4> - return streamable.iterator(); - } -} - -interface ProductRepository implements Repository { - Products findAllByDescriptionContaining(String text); <5> -} ----- -<1> A `Product` entity that exposes API to access the product's price. -<2> A wrapper type for a `Streamable` that can be constructed by using `Products.of(…)` (factory method created with the Lombok annotation). - A standard constructor taking the `Streamable` will do as well. -<3> The wrapper type exposes an additional API, calculating new values on the `Streamable`. -<4> Implement the `Streamable` interface and delegate to the actual result. -<5> That wrapper type `Products` can be used directly as a query method return type. -You do not need to return `Streamable` and manually wrap it after the query in the repository client. -==== - -[[repositories.collections-and-iterables.vavr]] -=== Support for Vavr Collections - -https://www.vavr.io/[Vavr] is a library that embraces functional programming concepts in Java. -It ships with a custom set of collection types that you can use as query method return types, as the following table shows: - -[options=header] -|==== -|Vavr collection type|Used Vavr implementation type|Valid Java source types -|`io.vavr.collection.Seq`|`io.vavr.collection.List`|`java.util.Iterable` -|`io.vavr.collection.Set`|`io.vavr.collection.LinkedHashSet`|`java.util.Iterable` -|`io.vavr.collection.Map`|`io.vavr.collection.LinkedHashMap`|`java.util.Map` -|==== - -You can use the types in the first column (or subtypes thereof) as query method return types and get the types in the second column used as implementation type, depending on the Java type of the actual query result (third column). -Alternatively, you can declare `Traversable` (the Vavr `Iterable` equivalent), and we then derive the implementation class from the actual return value. -That is, a `java.util.List` is turned into a Vavr `List` or `Seq`, a `java.util.Set` becomes a Vavr `LinkedHashSet` `Set`, and so on. - - -[[repositories.query-streaming]] -== Streaming Query Results - -You can process the results of query methods incrementally by using a Java 8 `Stream` as the return type. -Instead of wrapping the query results in a `Stream`, data store-specific methods are used to perform the streaming, as shown in the following example: - -.Stream the result of a query with Java 8 `Stream` -==== -[source,java] ----- -@Query("select u from User u") -Stream findAllByCustomQueryAndStream(); - -Stream readAllByFirstnameNotNull(); - -@Query("select u from User u") -Stream streamAllPaged(Pageable pageable); ----- -==== - -NOTE: A `Stream` potentially wraps underlying data store-specific resources and must, therefore, be closed after usage. 
-You can either manually close the `Stream` by using the `close()` method or by using a Java 7 `try-with-resources` block, as shown in the following example: - -.Working with a `Stream` result in a `try-with-resources` block -==== -[source,java] ----- -try (Stream stream = repository.findAllByCustomQueryAndStream()) { - stream.forEach(…); -} ----- -==== - -NOTE: Not all Spring Data modules currently support `Stream` as a return type. - -include::repositories-null-handling.adoc[] - -[[repositories.query-async]] -== Asynchronous Query Results - -You can run repository queries asynchronously by using {spring-framework-docs}/integration.html#scheduling[Spring's asynchronous method running capability]. -This means the method returns immediately upon invocation while the actual query occurs in a task that has been submitted to a Spring `TaskExecutor`. -Asynchronous queries differ from reactive queries and should not be mixed. -See the store-specific documentation for more details on reactive support. -The following example shows a number of asynchronous queries: - -==== -[source,java] ----- -@Async -Future findByFirstname(String firstname); <1> - -@Async -CompletableFuture findOneByFirstname(String firstname); <2> ----- -<1> Use `java.util.concurrent.Future` as the return type. -<2> Use a Java 8 `java.util.concurrent.CompletableFuture` as the return type. -==== - -[[repositories.create-instances]] -== Creating Repository Instances - -This section covers how to create instances and bean definitions for the defined repository interfaces. - -[[repositories.create-instances.java-config]] -== Java Configuration - -Use the store-specific `@Enable{store}Repositories` annotation on a Java configuration class to define a configuration for repository activation. -For an introduction to Java-based configuration of the Spring container, see {spring-framework-docs}/core.html#beans-java[JavaConfig in the Spring reference documentation]. - -A sample configuration to enable Spring Data repositories resembles the following: - -.Sample annotation-based repository configuration -==== -[source,java] ----- -@Configuration -@EnableJpaRepositories("com.acme.repositories") -class ApplicationConfiguration { - - @Bean - EntityManagerFactory entityManagerFactory() { - // … - } -} ----- -==== - -NOTE: The preceding example uses the JPA-specific annotation, which you would change according to the store module you actually use. The same applies to the definition of the `EntityManagerFactory` bean. See the sections covering the store-specific configuration. - -ifeval::[{include-xml-namespaces} != false] -[[repositories.create-instances.spring]] -[[repositories.create-instances.xml]] -== XML Configuration - -Each Spring Data module includes a `repositories` element that lets you define a base package that Spring scans for you, as shown in the following example: - -.Enabling Spring Data repositories via XML -==== -[source,xml] ----- - - - - - - ----- -==== - -In the preceding example, Spring is instructed to scan `com.acme.repositories` and all its sub-packages for interfaces extending `Repository` or one of its sub-interfaces. -For each interface found, the infrastructure registers the persistence technology-specific `FactoryBean` to create the appropriate proxies that handle invocations of the query methods. -Each bean is registered under a bean name that is derived from the interface name, so an interface of `UserRepository` would be registered under `userRepository`. 
-Bean names for nested repository interfaces are prefixed with their enclosing type name. -The base package attribute allows wildcards so that you can define a pattern of scanned packages. -endif::[] - -[[repositories.using-filters]] -== Using Filters - -By default, the infrastructure picks up every interface that extends the persistence technology-specific `Repository` sub-interface located under the configured base package and creates a bean instance for it. -However, you might want more fine-grained control over which interfaces have bean instances created for them. -To do so, use filter elements inside the repository declaration. -The semantics are exactly equivalent to the elements in Spring's component filters. -For details, see the {spring-framework-docs}/core.html#beans-scanning-filters[Spring reference documentation] for these elements. - -For example, to exclude certain interfaces from instantiation as repository beans, you could use the following configuration: - -.Using filters -==== -.Java -[source,java,subs="attributes,specialchars",role="primary"] ----- -@Configuration -@Enable{store}Repositories(basePackages = "com.acme.repositories", - includeFilters = { @Filter(type = FilterType.REGEX, pattern = ".*SomeRepository") }, - excludeFilters = { @Filter(type = FilterType.REGEX, pattern = ".*SomeOtherRepository") }) -class ApplicationConfiguration { - - @Bean - EntityManagerFactory entityManagerFactory() { - // … - } -} ----- - -ifeval::[{include-xml-namespaces} != false] -.XML -[source,xml,role="secondary"] ----- - - - - ----- -endif::[] -==== - -The preceding example excludes all interfaces ending in `SomeRepository` from being instantiated and includes those ending with `SomeOtherRepository`. - - -[[repositories.create-instances.standalone]] -== Standalone Usage - -You can also use the repository infrastructure outside of a Spring container -- for example, in CDI environments. You still need some Spring libraries in your classpath, but, generally, you can set up repositories programmatically as well. The Spring Data modules that provide repository support ship with a persistence technology-specific `RepositoryFactory` that you can use, as follows: - -.Standalone usage of the repository factory -==== -[source,java] ----- -RepositoryFactorySupport factory = … // Instantiate factory here -UserRepository repository = factory.getRepository(UserRepository.class); ----- -==== - -[[repositories.custom-implementations]] -== Custom Implementations for Spring Data Repositories - -Spring Data provides various options to create query methods with little coding. -But when those options don't fit your needs you can also provide your own custom implementation for repository methods. -This section describes how to do that. - -[[repositories.single-repository-behavior]] -== Customizing Individual Repositories - -To enrich a repository with custom functionality, you must first define a fragment interface and an implementation for the custom functionality, as follows: - -.Interface for custom repository functionality -==== -[source,java] ----- -interface CustomizedUserRepository { - void someCustomMethod(User user); -} ----- -==== - -.Implementation of custom repository functionality -==== -[source,java] ----- -class CustomizedUserRepositoryImpl implements CustomizedUserRepository { - - public void someCustomMethod(User user) { - // Your custom implementation - } -} ----- -==== - -NOTE: The most important part of the class name that corresponds to the fragment interface is the `Impl` postfix. 
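
As a rough sketch of how such a fragment implementation might look in practice, the following example injects a collaborator through the constructor. The `JdbcTemplate`, the SQL statement, and the `getId()` accessor are assumptions made purely for illustration, not part of the original example:

.A fragment implementation with an injected collaborator (illustrative sketch)
====
[source,java]
----
class CustomizedUserRepositoryImpl implements CustomizedUserRepository {

  private final JdbcTemplate jdbcTemplate;

  CustomizedUserRepositoryImpl(JdbcTemplate jdbcTemplate) { <1>
    this.jdbcTemplate = jdbcTemplate;
  }

  @Override
  public void someCustomMethod(User user) {
    // Illustrative only: any store-specific logic that derived query methods cannot express
    jdbcTemplate.update("update users set active = false where id = ?", user.getId()); <2>
  }
}
----
<1> The fragment is registered as a regular Spring bean, so constructor injection works as usual.
<2> The table name, column, and `getId()` accessor are placeholders assumed for this sketch.
====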
- -The implementation itself does not depend on Spring Data and can be a regular Spring bean. -Consequently, you can use standard dependency injection behavior to inject references to other beans (such as a `JdbcTemplate`), take part in aspects, and so on. - -Then you can let your repository interface extend the fragment interface, as follows: - -.Changes to your repository interface -==== -[source,java] ----- -interface UserRepository extends CrudRepository, CustomizedUserRepository { - - // Declare query methods here -} ----- -==== - -Extending the fragment interface with your repository interface combines the CRUD and custom functionality and makes it available to clients. - -Spring Data repositories are implemented by using fragments that form a repository composition. -Fragments are the base repository, functional aspects (such as <>), and custom interfaces along with their implementations. -Each time you add an interface to your repository interface, you enhance the composition by adding a fragment. -The base repository and repository aspect implementations are provided by each Spring Data module. - -The following example shows custom interfaces and their implementations: - -.Fragments with their implementations -==== -[source,java] ----- -interface HumanRepository { - void someHumanMethod(User user); -} - -class HumanRepositoryImpl implements HumanRepository { - - public void someHumanMethod(User user) { - // Your custom implementation - } -} - -interface ContactRepository { - - void someContactMethod(User user); - - User anotherContactMethod(User user); -} - -class ContactRepositoryImpl implements ContactRepository { - - public void someContactMethod(User user) { - // Your custom implementation - } - - public User anotherContactMethod(User user) { - // Your custom implementation - } -} ----- -==== - -The following example shows the interface for a custom repository that extends `CrudRepository`: - -.Changes to your repository interface -==== -[source,java] ----- -interface UserRepository extends CrudRepository, HumanRepository, ContactRepository { - - // Declare query methods here -} ----- -==== - -Repositories may be composed of multiple custom implementations that are imported in the order of their declaration. -Custom implementations have a higher priority than the base implementation and repository aspects. -This ordering lets you override base repository and aspect methods and resolves ambiguity if two fragments contribute the same method signature. -Repository fragments are not limited to use in a single repository interface. -Multiple repositories may use a fragment interface, letting you reuse customizations across different repositories. - -The following example shows a repository fragment and its implementation: - -.Fragments overriding `save(…)` -==== -[source,java] ----- -interface CustomizedSave { - S save(S entity); -} - -class CustomizedSaveImpl implements CustomizedSave { - - public S save(S entity) { - // Your custom implementation - } -} ----- -==== - -The following example shows a repository that uses the preceding repository fragment: - -.Customized repository interfaces -==== -[source,java] ----- -interface UserRepository extends CrudRepository, CustomizedSave { -} - -interface PersonRepository extends CrudRepository, CustomizedSave { -} ----- -==== - -[[repositories.configuration]] -=== Configuration - -The repository infrastructure tries to autodetect custom implementation fragments by scanning for classes below the package in which it found a repository. 
-These classes need to follow the naming convention of appending a postfix defaulting to `Impl`. - -The following example shows a repository that uses the default postfix and a repository that sets a custom value for the postfix: - -.Configuration example -==== -.Java -[source,java,subs="attributes,specialchars",role="primary"] ----- -@Enable{store}Repositories(repositoryImplementationPostfix = "MyPostfix") -class Configuration { … } ----- - -ifeval::[{include-xml-namespaces} != false] -.XML -[source,xml,role="secondary"] ----- - - - ----- -endif::[] -==== - -The first configuration in the preceding example tries to look up a class called `com.acme.repository.CustomizedUserRepositoryImpl` to act as a custom repository implementation. -The second example tries to look up `com.acme.repository.CustomizedUserRepositoryMyPostfix`. - -[[repositories.single-repository-behaviour.ambiguity]] -==== Resolution of Ambiguity - -If multiple implementations with matching class names are found in different packages, Spring Data uses the bean names to identify which one to use. - -Given the following two custom implementations for the `CustomizedUserRepository` shown earlier, the first implementation is used. -Its bean name is `customizedUserRepositoryImpl`, which matches that of the fragment interface (`CustomizedUserRepository`) plus the postfix `Impl`. - -.Resolution of ambiguous implementations -==== -[source,java] ----- -package com.acme.impl.one; - -class CustomizedUserRepositoryImpl implements CustomizedUserRepository { - - // Your custom implementation -} ----- - -[source,java] ----- -package com.acme.impl.two; - -@Component("specialCustomImpl") -class CustomizedUserRepositoryImpl implements CustomizedUserRepository { - - // Your custom implementation -} ----- -==== - -If you annotate the `UserRepository` interface with `@Component("specialCustom")`, the bean name plus `Impl` then matches the one defined for the repository implementation in `com.acme.impl.two`, and it is used instead of the first one. - -[[repositories.manual-wiring]] -==== Manual Wiring - -If your custom implementation uses annotation-based configuration and autowiring only, the preceding approach shown works well, because it is treated as any other Spring bean. -If your implementation fragment bean needs special wiring, you can declare the bean and name it according to the conventions described in the <>. -The infrastructure then refers to the manually defined bean definition by name instead of creating one itself. -The following example shows how to manually wire a custom implementation: - -.Manual wiring of custom implementations -==== - -.Java -[source,java,role="primary"] ----- -class MyClass { - MyClass(@Qualifier("userRepositoryImpl") UserRepository userRepository) { - … - } -} ----- - -ifeval::[{include-xml-namespaces} != false] -.XML -[source,xml,role="secondary"] ----- - - - - - ----- -endif::[] - -==== - -[[repositories.customize-base-repository]] -== Customize the Base Repository - -The approach described in the <> requires customization of each repository interfaces when you want to customize the base repository behavior so that all repositories are affected. -To instead change behavior for all repositories, you can create an implementation that extends the persistence technology-specific repository base class. 
-This class then acts as a custom base class for the repository proxies, as shown in the following example:
-
-.Custom repository base class
-====
-[source,java]
-----
-class MyRepositoryImpl<T, ID>
-  extends SimpleJpaRepository<T, ID> {
-
-  private final EntityManager entityManager;
-
-  MyRepositoryImpl(JpaEntityInformation<T, ?> entityInformation,
-                          EntityManager entityManager) {
-    super(entityInformation, entityManager);
-
-    // Keep the EntityManager around so it can be used from the newly introduced methods.
-    this.entityManager = entityManager;
-  }
-
-  @Transactional
-  public <S extends T> S save(S entity) {
-    // implementation goes here
-  }
-}
-----
-====
-
-CAUTION: The class needs to have a constructor matching that of the super class which the store-specific repository factory implementation uses.
-If the repository base class has multiple constructors, override the one taking an `EntityInformation` plus a store specific infrastructure object (such as an `EntityManager` or a template class).
-
-The final step is to make the Spring Data infrastructure aware of the customized repository base class.
-In configuration, you can do so by using the `repositoryBaseClass` attribute, as shown in the following example:
-
-.Configuring a custom repository base class
-====
-.Java
-[source,java,subs="attributes,specialchars",role="primary"]
-----
-@Configuration
-@Enable{store}Repositories(repositoryBaseClass = MyRepositoryImpl.class)
-class ApplicationConfiguration { … }
-----
-
-ifeval::[{include-xml-namespaces} != false]
-.XML
-[source,xml,role="secondary"]
-----
-
-----
-endif::[]
-====
-
-[[core.domain-events]]
-== Publishing Events from Aggregate Roots
-
-Entities managed by repositories are aggregate roots.
-In a Domain-Driven Design application, these aggregate roots usually publish domain events.
-Spring Data provides an annotation called `@DomainEvents` that you can use on a method of your aggregate root to make that publication as easy as possible, as shown in the following example:
-
-.Exposing domain events from an aggregate root
-====
-[source,java]
-----
-class AnAggregateRoot {
-
-    @DomainEvents <1>
-    Collection<Object> domainEvents() {
-        // … return events you want to get published here
-    }
-
-    @AfterDomainEventPublication <2>
-    void callbackMethod() {
-       // … potentially clean up domain events list
-    }
-}
-----
-<1> The method that uses `@DomainEvents` can return either a single event instance or a collection of events.
-It must not take any arguments.
-<2> After all events have been published, we have a method annotated with `@AfterDomainEventPublication`.
-You can use it to potentially clean the list of events to be published (among other uses).
-====
-
-The methods are called every time one of the following Spring Data repository methods is called:
-
-* `save(…)`, `saveAll(…)`
-* `delete(…)`, `deleteAll(…)`, `deleteAllInBatch(…)`, `deleteInBatch(…)`
-
-Note that these methods take the aggregate root instances as arguments.
-This is why `deleteById(…)` is notably absent: the implementations might choose to issue a query deleting the instance, so we would never have access to the aggregate instance in the first place.
-
-[[core.extensions]]
-== Spring Data Extensions
-
-This section documents a set of Spring Data extensions that enable Spring Data usage in a variety of contexts.
-Currently, most of the integration is targeted towards Spring MVC.
- -[[core.extensions.querydsl]] -== Querydsl Extension - -http://www.querydsl.com/[Querydsl] is a framework that enables the construction of statically typed SQL-like queries through its fluent API. - -Several Spring Data modules offer integration with Querydsl through `QuerydslPredicateExecutor`, as the following example shows: - -.QuerydslPredicateExecutor interface -==== -[source,java] ----- -public interface QuerydslPredicateExecutor { - - Optional findById(Predicate predicate); <1> - - Iterable findAll(Predicate predicate); <2> - - long count(Predicate predicate); <3> - - boolean exists(Predicate predicate); <4> - - // … more functionality omitted. -} ----- -<1> Finds and returns a single entity matching the `Predicate`. -<2> Finds and returns all entities matching the `Predicate`. -<3> Returns the number of entities matching the `Predicate`. -<4> Returns whether an entity that matches the `Predicate` exists. -==== - -To use the Querydsl support, extend `QuerydslPredicateExecutor` on your repository interface, as the following example shows: - -.Querydsl integration on repositories -==== -[source,java] ----- -interface UserRepository extends CrudRepository, QuerydslPredicateExecutor { -} ----- -==== - -The preceding example lets you write type-safe queries by using Querydsl `Predicate` instances, as the following example shows: - -[source,java] ----- -Predicate predicate = user.firstname.equalsIgnoreCase("dave") - .and(user.lastname.startsWithIgnoreCase("mathews")); - -userRepository.findAll(predicate); ----- - -[[core.web]] -== Web support - -Spring Data modules that support the repository programming model ship with a variety of web support. -The web related components require Spring MVC JARs to be on the classpath. -Some of them even provide integration with https://github.com/spring-projects/spring-hateoas[Spring HATEOAS]. -In general, the integration support is enabled by using the `@EnableSpringDataWebSupport` annotation in your JavaConfig configuration class, as the following example shows: - -.Enabling Spring Data web support -==== -.Java -[source,java,role="primary"] ----- -@Configuration -@EnableWebMvc -@EnableSpringDataWebSupport -class WebConfiguration {} ----- - -.XML -[source,xml,role="secondary"] ----- - - - - ----- -==== - -The `@EnableSpringDataWebSupport` annotation registers a few components. -We discuss those later in this section. -It also detects Spring HATEOAS on the classpath and registers integration components (if present) for it as well. - -.Enabling Spring Data web support in XML - -[[core.web.basic]] -=== Basic Web Support - -The configuration shown in the <> registers a few basic components: - -- A <> to let Spring MVC resolve instances of repository-managed domain classes from request parameters or path variables. -- <> implementations to let Spring MVC resolve `Pageable` and `Sort` instances from request parameters. -- <> to de-/serialize types like `Point` and `Distance`, or store specific ones, depending on the Spring Data Module used. 
- -[[core.web.basic.domain-class-converter]] -==== Using the `DomainClassConverter` Class - -The `DomainClassConverter` class lets you use domain types in your Spring MVC controller method signatures directly so that you need not manually lookup the instances through the repository, as the following example shows: - -.A Spring MVC controller using domain types in method signatures -==== -[source,java] ----- -@Controller -@RequestMapping("/users") -class UserController { - - @RequestMapping("/{id}") - String showUserForm(@PathVariable("id") User user, Model model) { - - model.addAttribute("user", user); - return "userForm"; - } -} ----- -==== - -The method receives a `User` instance directly, and no further lookup is necessary. -The instance can be resolved by letting Spring MVC convert the path variable into the `id` type of the domain class first and eventually access the instance through calling `findById(…)` on the repository instance registered for the domain type. - -NOTE: Currently, the repository has to implement `CrudRepository` to be eligible to be discovered for conversion. - -[[core.web.basic.paging-and-sorting]] -==== HandlerMethodArgumentResolvers for Pageable and Sort - -The configuration snippet shown in the <> also registers a `PageableHandlerMethodArgumentResolver` as well as an instance of `SortHandlerMethodArgumentResolver`. -The registration enables `Pageable` and `Sort` as valid controller method arguments, as the following example shows: - -.Using Pageable as a controller method argument -==== -[source,java] ----- -@Controller -@RequestMapping("/users") -class UserController { - - private final UserRepository repository; - - UserController(UserRepository repository) { - this.repository = repository; - } - - @RequestMapping - String showUsers(Model model, Pageable pageable) { - - model.addAttribute("users", repository.findAll(pageable)); - return "users"; - } -} ----- -==== - -The preceding method signature causes Spring MVC try to derive a `Pageable` instance from the request parameters by using the following default configuration: - -.Request parameters evaluated for `Pageable` instances -[options = "autowidth"] -|=== -|`page`|Page you want to retrieve. 0-indexed and defaults to 0. -|`size`|Size of the page you want to retrieve. Defaults to 20. -|`sort`|Properties that should be sorted by in the format `property,property(,ASC\|DESC)(,IgnoreCase)`. The default sort direction is case-sensitive ascending. Use multiple `sort` parameters if you want to switch direction or case sensitivity -- for example, `?sort=firstname&sort=lastname,asc&sort=city,ignorecase`. -|=== - -To customize this behavior, register a bean that implements the `PageableHandlerMethodArgumentResolverCustomizer` interface or the `SortHandlerMethodArgumentResolverCustomizer` interface, respectively. -Its `customize()` method gets called, letting you change settings, as the following example shows: - -==== -[source,java] ----- -@Bean SortHandlerMethodArgumentResolverCustomizer sortCustomizer() { - return s -> s.setPropertyDelimiter("<-->"); -} ----- -==== - -If setting the properties of an existing `MethodArgumentResolver` is not sufficient for your purpose, extend either `SpringDataWebConfiguration` or the HATEOAS-enabled equivalent, override the `pageableResolver()` or `sortResolver()` methods, and import your customized configuration file instead of using the `@Enable` annotation. 
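
The same customizer-based approach applies to the `Pageable` resolver. The following is a minimal sketch; the bean method name, the page-size cap, and the fallback values are arbitrary illustrations, not defaults:

====
[source,java]
----
@Bean PageableHandlerMethodArgumentResolverCustomizer pageableCustomizer() {
    // Cap the page size clients may request and fall back to 10-element pages
    return p -> {
        p.setMaxPageSize(100);
        p.setFallbackPageable(PageRequest.of(0, 10));
    };
}
----
====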
- -If you need multiple `Pageable` or `Sort` instances to be resolved from the request (for multiple tables, for example), you can use Spring's `@Qualifier` annotation to distinguish one from another. -The request parameters then have to be prefixed with `${qualifier}_`. -The following example shows the resulting method signature: - -==== -[source,java] ----- -String showUsers(Model model, - @Qualifier("thing1") Pageable first, - @Qualifier("thing2") Pageable second) { … } ----- -==== - -You have to populate `thing1_page`, `thing2_page`, and so on. - -The default `Pageable` passed into the method is equivalent to a `PageRequest.of(0, 20)`, but you can customize it by using the `@PageableDefault` annotation on the `Pageable` parameter. - -[[core.web.pageables]] -=== Hypermedia Support for `Page` and `Slice` - -Spring HATEOAS ships with a representation model class (`PagedModel`/`SlicedModel`) that allows enriching the content of a `Page` or `Slice` instance with the necessary `Page`/`Slice` metadata as well as links to let the clients easily navigate the pages. -The conversion of a `Page` to a `PagedModel` is done by an implementation of the Spring HATEOAS `RepresentationModelAssembler` interface, called the `PagedResourcesAssembler`. -Similarly `Slice` instances can be converted to a `SlicedModel` using a `SlicedResourcesAssembler`. -The following example shows how to use a `PagedResourcesAssembler` as a controller method argument, as the `SlicedResourcesAssembler` works exactly the same: - -.Using a PagedResourcesAssembler as controller method argument -==== -[source,java] ----- -@Controller -class PersonController { - - private final PersonRepository repository; - - // Constructor omitted - - @GetMapping("/people") - HttpEntity> people(Pageable pageable, - PagedResourcesAssembler assembler) { - - Page people = repository.findAll(pageable); - return ResponseEntity.ok(assembler.toModel(people)); - } -} ----- -==== - -Enabling the configuration, as shown in the preceding example, lets the `PagedResourcesAssembler` be used as a controller method argument. -Calling `toModel(…)` on it has the following effects: - -* The content of the `Page` becomes the content of the `PagedModel` instance. -* The `PagedModel` object gets a `PageMetadata` instance attached, and it is populated with information from the `Page` and the underlying `Pageable`. -* The `PagedModel` may get `prev` and `next` links attached, depending on the page's state. -The links point to the URI to which the method maps. -The pagination parameters added to the method match the setup of the `PageableHandlerMethodArgumentResolver` to make sure the links can be resolved later. - -Assume we have 30 `Person` instances in the database. -You can now trigger a request (`GET http://localhost:8080/people`) and see output similar to the following: - -==== -[source,javascript] ----- -{ "links" : [ - { "rel" : "next", "href" : "http://localhost:8080/persons?page=1&size=20" } - ], - "content" : [ - … // 20 Person instances rendered here - ], - "pageMetadata" : { - "size" : 20, - "totalElements" : 30, - "totalPages" : 2, - "number" : 0 - } -} ----- -==== - -WARNING: The JSON envelope format shown here doesn't follow any formally specified structure and it's not guaranteed stable and we might change it at any time. -It's highly recommended to enable the rendering as a hypermedia-enabled, official media type, supported by Spring HATEOAS, like https://docs.spring.io/spring-hateoas/docs/{springHateoasVersion}/reference/html/#mediatypes.hal[HAL]. 
-Those can be activated by using its `@EnableHypermediaSupport` annotation. -Find more information in the https://docs.spring.io/spring-hateoas/docs/{springHateoasVersion}/reference/html/#configuration.at-enable[Spring HATEOAS reference documentation]. - -The assembler produced the correct URI and also picked up the default configuration to resolve the parameters into a `Pageable` for an upcoming request. -This means that, if you change that configuration, the links automatically adhere to the change. -By default, the assembler points to the controller method it was invoked in, but you can customize that by passing a custom `Link` to be used as base to build the pagination links, which overloads the `PagedResourcesAssembler.toModel(…)` method. - -[[core.web.basic.jackson-mappers]] -=== Spring Data Jackson Modules - -The core module, and some of the store specific ones, ship with a set of Jackson Modules for types, like `org.springframework.data.geo.Distance` and `org.springframework.data.geo.Point`, used by the Spring Data domain. + -Those Modules are imported once <> is enabled and `com.fasterxml.jackson.databind.ObjectMapper` is available. - -During initialization `SpringDataJacksonModules`, like the `SpringDataJacksonConfiguration`, get picked up by the infrastructure, so that the declared ``com.fasterxml.jackson.databind.Module``s are made available to the Jackson `ObjectMapper`. - -Data binding mixins for the following domain types are registered by the common infrastructure. ----- -org.springframework.data.geo.Distance -org.springframework.data.geo.Point -org.springframework.data.geo.Box -org.springframework.data.geo.Circle -org.springframework.data.geo.Polygon ----- - -[NOTE] -==== -The individual module may provide additional `SpringDataJacksonModules`. + -Please refer to the store specific section for more details. -==== - -[[core.web.binding]] -=== Web Databinding Support - -You can use Spring Data projections (described in <>) to bind incoming request payloads by using either https://goessner.net/articles/JsonPath/[JSONPath] expressions (requires https://github.com/json-path/JsonPath[Jayway JsonPath]) or https://www.w3.org/TR/xpath-31/[XPath] expressions (requires https://xmlbeam.org/[XmlBeam]), as the following example shows: - -.HTTP payload binding using JSONPath or XPath expressions -==== -[source,java] ----- -@ProjectedPayload -public interface UserPayload { - - @XBRead("//firstname") - @JsonPath("$..firstname") - String getFirstname(); - - @XBRead("/lastname") - @JsonPath({ "$.lastname", "$.user.lastname" }) - String getLastname(); -} ----- -==== - -You can use the type shown in the preceding example as a Spring MVC handler method argument or by using `ParameterizedTypeReference` on one of methods of the `RestTemplate`. -The preceding method declarations would try to find `firstname` anywhere in the given document. -The `lastname` XML lookup is performed on the top-level of the incoming document. -The JSON variant of that tries a top-level `lastname` first but also tries `lastname` nested in a `user` sub-document if the former does not return a value. -That way, changes in the structure of the source document can be mitigated easily without having clients calling the exposed methods (usually a drawback of class-based payload binding). - -Nested projections are supported as described in <>. -If the method returns a complex, non-interface type, a Jackson `ObjectMapper` is used to map the final value. 
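
To make the usage concrete, such a projected payload can be declared as the body of a handler method. The controller name, mapping, and response handling below are made up for this sketch; only `UserPayload` comes from the preceding example:

.Consuming a projected payload in a Spring MVC controller (illustrative sketch)
====
[source,java]
----
@RestController
class UserSignupController {

  @PostMapping("/users")
  ResponseEntity<String> register(@RequestBody UserPayload payload) {
    // Accessor calls are resolved against the raw JSON or XML payload via JSONPath/XPath
    return ResponseEntity.ok(payload.getFirstname() + " " + payload.getLastname());
  }
}
----
====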
- -For Spring MVC, the necessary converters are registered automatically as soon as `@EnableSpringDataWebSupport` is active and the required dependencies are available on the classpath. -For usage with `RestTemplate`, register a `ProjectingJackson2HttpMessageConverter` (JSON) or `XmlBeamHttpMessageConverter` manually. - -For more information, see the https://github.com/spring-projects/spring-data-examples/tree/main/web/projection[web projection example] in the canonical https://github.com/spring-projects/spring-data-examples[Spring Data Examples repository]. - -[[core.web.type-safe]] -=== Querydsl Web Support - -For those stores that have http://www.querydsl.com/[QueryDSL] integration, you can derive queries from the attributes contained in a `Request` query string. - -Consider the following query string: - -==== -[source,text] ----- -?firstname=Dave&lastname=Matthews ----- -==== - -Given the `User` object from the previous examples, you can resolve a query string to the following value by using the `QuerydslPredicateArgumentResolver`, as follows: - -==== -[source,text] ----- -QUser.user.firstname.eq("Dave").and(QUser.user.lastname.eq("Matthews")) ----- -==== - -NOTE: The feature is automatically enabled, along with `@EnableSpringDataWebSupport`, when Querydsl is found on the classpath. - -Adding a `@QuerydslPredicate` to the method signature provides a ready-to-use `Predicate`, which you can run by using the `QuerydslPredicateExecutor`. - -TIP: Type information is typically resolved from the method's return type. -Since that information does not necessarily match the domain type, it might be a good idea to use the `root` attribute of `QuerydslPredicate`. - -The following example shows how to use `@QuerydslPredicate` in a method signature: - -==== -[source,java] ----- -@Controller -class UserController { - - @Autowired UserRepository repository; - - @RequestMapping(value = "/", method = RequestMethod.GET) - String index(Model model, @QuerydslPredicate(root = User.class) Predicate predicate, <1> - Pageable pageable, @RequestParam MultiValueMap parameters) { - - model.addAttribute("users", repository.findAll(predicate, pageable)); - - return "index"; - } -} ----- -<1> Resolve query string arguments to matching `Predicate` for `User`. -==== - -The default binding is as follows: - -* `Object` on simple properties as `eq`. -* `Object` on collection like properties as `contains`. -* `Collection` on simple properties as `in`. - -You can customize those bindings through the `bindings` attribute of `@QuerydslPredicate` or by making use of Java 8 `default methods` and adding the `QuerydslBinderCustomizer` method to the repository interface, as follows: - -==== -[source,java] ----- -interface UserRepository extends CrudRepository, - QuerydslPredicateExecutor, <1> - QuerydslBinderCustomizer { <2> - - @Override - default void customize(QuerydslBindings bindings, QUser user) { - - bindings.bind(user.username).first((path, value) -> path.contains(value)) <3> - bindings.bind(String.class) - .first((StringPath path, String value) -> path.containsIgnoreCase(value)); <4> - bindings.excluding(user.password); <5> - } -} ----- -<1> `QuerydslPredicateExecutor` provides access to specific finder methods for `Predicate`. -<2> `QuerydslBinderCustomizer` defined on the repository interface is automatically picked up and shortcuts `@QuerydslPredicate(bindings=...)`. -<3> Define the binding for the `username` property to be a simple `contains` binding. 
-<4> Define the default binding for `String` properties to be a case-insensitive `contains` match. -<5> Exclude the `password` property from `Predicate` resolution. -==== - -TIP: You can register a `QuerydslBinderCustomizerDefaults` bean holding default Querydsl bindings before applying specific bindings from the repository or `@QuerydslPredicate`. - -ifeval::[{include-xml-namespaces} != false] -[[core.repository-populators]] -== Repository Populators - -If you work with the Spring JDBC module, you are probably familiar with the support for populating a `DataSource` with SQL scripts. -A similar abstraction is available on the repositories level, although it does not use SQL as the data definition language because it must be store-independent. -Thus, the populators support XML (through Spring's OXM abstraction) and JSON (through Jackson) to define data with which to populate the repositories. - -Assume you have a file called `data.json` with the following content: - -.Data defined in JSON -==== -[source,javascript] ----- -[ { "_class" : "com.acme.Person", - "firstname" : "Dave", - "lastname" : "Matthews" }, - { "_class" : "com.acme.Person", - "firstname" : "Carter", - "lastname" : "Beauford" } ] ----- -==== - -You can populate your repositories by using the populator elements of the repository namespace provided in Spring Data Commons. -To populate the preceding data to your `PersonRepository`, declare a populator similar to the following: - -.Declaring a Jackson repository populator -==== -[source,xml] ----- - - - - - - ----- -==== - -The preceding declaration causes the `data.json` file to be read and deserialized by a Jackson `ObjectMapper`. - -The type to which the JSON object is unmarshalled is determined by inspecting the `_class` attribute of the JSON document. -The infrastructure eventually selects the appropriate repository to handle the object that was deserialized. - -To instead use XML to define the data the repositories should be populated with, you can use the `unmarshaller-populator` element. -You configure it to use one of the XML marshaller options available in Spring OXM. See the {spring-framework-docs}/data-access.html#oxm[Spring reference documentation] for details. -The following example shows how to unmarshall a repository populator with JAXB: - -.Declaring an unmarshalling repository populator (using JAXB) -==== -[source,xml] ----- - - - - - - - - ----- -==== -endif::[] diff --git a/src/main/asciidoc/spring-data-commons-docs/repository-namespace-reference.adoc b/src/main/asciidoc/spring-data-commons-docs/repository-namespace-reference.adoc deleted file mode 100644 index bcf7d09b4..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/repository-namespace-reference.adoc +++ /dev/null @@ -1,18 +0,0 @@ -[[repositories.namespace-reference]] -[appendix] -= Namespace reference - -[[populator.namespace-dao-config]] -== The Element -The `` element triggers the setup of the Spring Data repository infrastructure. The most important attribute is `base-package`, which defines the package to scan for Spring Data repository interfaces. See "`<>`". The following table describes the attributes of the `` element: - -.Attributes -[options="header", cols="1,3"] -|=============== -|Name|Description -|`base-package`|Defines the package to be scanned for repository interfaces that extend `*Repository` (the actual interface is determined by the specific Spring Data module) in auto-detection mode. All packages below the configured package are scanned, too. Wildcards are allowed. 
-|`repository-impl-postfix`|Defines the postfix to autodetect custom repository implementations. Classes whose names end with the configured postfix are considered as candidates. Defaults to `Impl`. -|`query-lookup-strategy`|Determines the strategy to be used to create finder queries. See "`<>`" for details. Defaults to `create-if-not-found`. -|`named-queries-location`|Defines the location to search for a Properties file containing externally defined queries. -|`consider-nested-repositories`|Whether nested repository interface definitions should be considered. Defaults to `false`. -|=============== diff --git a/src/main/asciidoc/spring-data-commons-docs/repository-populator-namespace-reference.adoc b/src/main/asciidoc/spring-data-commons-docs/repository-populator-namespace-reference.adoc deleted file mode 100644 index 93a871940..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/repository-populator-namespace-reference.adoc +++ /dev/null @@ -1,15 +0,0 @@ -[[populator.namespace-reference]] -[appendix] -= Populators namespace reference - -[[namespace-dao-config]] -== The Element -The `` element allows to populate a data store via the Spring Data repository infrastructure.footnote:[see <>] - -.Attributes -[options="header", cols="1,3"] -|=============== -|Name|Description -|`locations`|Where to find the files to read the objects from the repository shall be populated with. -|=============== - diff --git a/src/main/asciidoc/spring-data-commons-docs/repository-projections.adoc b/src/main/asciidoc/spring-data-commons-docs/repository-projections.adoc deleted file mode 100644 index 9d5a8b998..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/repository-projections.adoc +++ /dev/null @@ -1,290 +0,0 @@ -ifndef::projection-collection[] -:projection-collection: Collection -endif::[] - -[[projections]] -= Projections - -Spring Data query methods usually return one or multiple instances of the aggregate root managed by the repository. -However, it might sometimes be desirable to create projections based on certain attributes of those types. -Spring Data allows modeling dedicated return types, to more selectively retrieve partial views of the managed aggregates. - -Imagine a repository and aggregate root type such as the following example: - -.A sample aggregate and repository -==== -[source, java, subs="+attributes"] ----- -class Person { - - @Id UUID id; - String firstname, lastname; - Address address; - - static class Address { - String zipCode, city, street; - } -} - -interface PersonRepository extends Repository { - - Collection findByLastname(String lastname); -} ----- -==== - -Now imagine that we want to retrieve the person's name attributes only. -What means does Spring Data offer to achieve this? The rest of this chapter answers that question. - -[[projections.interfaces]] -== Interface-based Projections - -The easiest way to limit the result of the queries to only the name attributes is by declaring an interface that exposes accessor methods for the properties to be read, as shown in the following example: - -.A projection interface to retrieve a subset of attributes -==== -[source, java] ----- -interface NamesOnly { - - String getFirstname(); - String getLastname(); -} ----- -==== - -The important bit here is that the properties defined here exactly match properties in the aggregate root. 
-Doing so lets a query method be added as follows: - -.A repository using an interface based projection with a query method -==== -[source, java, subs="+attributes"] ----- -interface PersonRepository extends Repository { - - Collection findByLastname(String lastname); -} ----- -==== - -The query execution engine creates proxy instances of that interface at runtime for each element returned and forwards calls to the exposed methods to the target object. - -NOTE: Declaring a method in your `Repository` that overrides a base method (e.g. declared in `CrudRepository`, a store-specific repository interface, or the `Simple…Repository`) results in a call to the base method regardless of the declared return type. Make sure to use a compatible return type as base methods cannot be used for projections. Some store modules support `@Query` annotations to turn an overridden base method into a query method that then can be used to return projections. - -[[projections.interfaces.nested]] -Projections can be used recursively. If you want to include some of the `Address` information as well, create a projection interface for that and return that interface from the declaration of `getAddress()`, as shown in the following example: - -.A projection interface to retrieve a subset of attributes -==== -[source, java] ----- -interface PersonSummary { - - String getFirstname(); - String getLastname(); - AddressSummary getAddress(); - - interface AddressSummary { - String getCity(); - } -} ----- -==== - -On method invocation, the `address` property of the target instance is obtained and wrapped into a projecting proxy in turn. - -[[projections.interfaces.closed]] -=== Closed Projections - -A projection interface whose accessor methods all match properties of the target aggregate is considered to be a closed projection. The following example (which we used earlier in this chapter, too) is a closed projection: - -.A closed projection -==== -[source, java] ----- -interface NamesOnly { - - String getFirstname(); - String getLastname(); -} ----- -==== - -If you use a closed projection, Spring Data can optimize the query execution, because we know about all the attributes that are needed to back the projection proxy. -For more details on that, see the module-specific part of the reference documentation. - -[[projections.interfaces.open]] -=== Open Projections - -Accessor methods in projection interfaces can also be used to compute new values by using the `@Value` annotation, as shown in the following example: - -[[projections.interfaces.open.simple]] -.An Open Projection -==== -[source, java] ----- -interface NamesOnly { - - @Value("#{target.firstname + ' ' + target.lastname}") - String getFullName(); - … -} ----- -==== - -The aggregate root backing the projection is available in the `target` variable. -A projection interface using `@Value` is an open projection. -Spring Data cannot apply query execution optimizations in this case, because the SpEL expression could use any attribute of the aggregate root. - -The expressions used in `@Value` should not be too complex -- you want to avoid programming in `String` variables. 
-For very simple expressions, one option might be to resort to default methods (introduced in Java 8), as shown in the following example: - -[[projections.interfaces.open.default]] -.A projection interface using a default method for custom logic -==== -[source, java] ----- -interface NamesOnly { - - String getFirstname(); - String getLastname(); - - default String getFullName() { - return getFirstname().concat(" ").concat(getLastname()); - } -} ----- -==== - -This approach requires you to be able to implement logic purely based on the other accessor methods exposed on the projection interface. -A second, more flexible, option is to implement the custom logic in a Spring bean and then invoke that from the SpEL expression, as shown in the following example: - -[[projections.interfaces.open.bean-reference]] -.Sample Person object -==== -[source, java] ----- -@Component -class MyBean { - - String getFullName(Person person) { - … - } -} - -interface NamesOnly { - - @Value("#{@myBean.getFullName(target)}") - String getFullName(); - … -} ----- -==== - -Notice how the SpEL expression refers to `myBean` and invokes the `getFullName(…)` method and forwards the projection target as a method parameter. -Methods backed by SpEL expression evaluation can also use method parameters, which can then be referred to from the expression. -The method parameters are available through an `Object` array named `args`. The following example shows how to get a method parameter from the `args` array: - -.Sample Person object -==== -[source, java] ----- -interface NamesOnly { - - @Value("#{args[0] + ' ' + target.firstname + '!'}") - String getSalutation(String prefix); -} ----- -==== - -Again, for more complex expressions, you should use a Spring bean and let the expression invoke a method, as described <>. - -[[projections.interfaces.nullable-wrappers]] -=== Nullable Wrappers - -Getters in projection interfaces can make use of nullable wrappers for improved null-safety. Currently supported wrapper types are: - -* `java.util.Optional` -* `com.google.common.base.Optional` -* `scala.Option` -* `io.vavr.control.Option` - -.A projection interface using nullable wrappers -==== -[source, java] ----- -interface NamesOnly { - - Optional getFirstname(); -} ----- -==== - -If the underlying projection value is not `null`, then values are returned using the present-representation of the wrapper type. -In case the backing value is `null`, then the getter method returns the empty representation of the used wrapper type. - -[[projections.dtos]] -== Class-based Projections (DTOs) - -Another way of defining projections is by using value type DTOs (Data Transfer Objects) that hold properties for the fields that are supposed to be retrieved. -These DTO types can be used in exactly the same way projection interfaces are used, except that no proxying happens and no nested projections can be applied. - -If the store optimizes the query execution by limiting the fields to be loaded, the fields to be loaded are determined from the parameter names of the constructor that is exposed. - -The following example shows a projecting DTO: - -.A projecting DTO -==== -[source,java] ----- -record NamesOnly(String firstname, String lastname) { -} ----- -==== - -Java Records are ideal to define DTO types since they adhere to value semantics: -All fields are `private final` and ``equals(…)``/``hashCode()``/``toString()`` methods are created automatically. -Alternatively, you can use any class that defines the properties you want to project. 
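
Used with the `Person` repository from the beginning of this chapter, the DTO is declared exactly like an interface projection. A short sketch follows; the repository and the `repository` variable are repeated here only for illustration:

.Using the projecting DTO as a query method return type
====
[source,java]
----
interface PersonRepository extends Repository<Person, UUID> {

  Collection<NamesOnly> findByLastname(String lastname);
}

// Each result is materialized through the record's canonical constructor
Collection<NamesOnly> names = repository.findByLastname("Matthews");
----
====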
- -ifdef::repository-projections-trailing-dto-fragment[] -include::{repository-projections-trailing-dto-fragment}[] -endif::[] - -[[projection.dynamic]] -== Dynamic Projections - -So far, we have used the projection type as the return type or element type of a collection. -However, you might want to select the type to be used at invocation time (which makes it dynamic). -To apply dynamic projections, use a query method such as the one shown in the following example: - -.A repository using a dynamic projection parameter -==== -[source,java,subs="+attributes"] ----- -interface PersonRepository extends Repository { - - Collection findByLastname(String lastname, Class type); -} ----- -==== - -This way, the method can be used to obtain the aggregates as is or with a projection applied, as shown in the following example: - -.Using a repository with dynamic projections -==== -[source,java,subs="+attributes"] ----- -void someMethod(PersonRepository people) { - - Collection aggregates = - people.findByLastname("Matthews", Person.class); - - Collection aggregates = - people.findByLastname("Matthews", NamesOnly.class); -} ----- -==== - -NOTE: Query parameters of type `Class` are inspected whether they qualify as dynamic projection parameter. -If the actual return type of the query equals the generic parameter type of the `Class` parameter, then the matching `Class` parameter is not available for usage within the query or SpEL expressions. -If you want to use a `Class` parameter as query argument then make sure to use a different generic parameter, for example `Class`. diff --git a/src/main/asciidoc/spring-data-commons-docs/repository-query-keywords-reference.adoc b/src/main/asciidoc/spring-data-commons-docs/repository-query-keywords-reference.adoc deleted file mode 100644 index 70a09f0e9..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/repository-query-keywords-reference.adoc +++ /dev/null @@ -1,72 +0,0 @@ -[[repository-query-keywords]] -[appendix] -= Repository query keywords - -[[appendix.query.method.subject]] -== Supported query method subject keywords - -The following table lists the subject keywords generally supported by the Spring Data repository query derivation mechanism to express the predicate. -Consult the store-specific documentation for the exact list of supported keywords, because some keywords listed here might not be supported in a particular store. - -.Query subject keywords -[options="header",cols="1,3"] -|=============== -|Keyword | Description -|`find…By`, `read…By`, `get…By`, `query…By`, `search…By`, `stream…By`| General query method returning typically the repository type, a `Collection` or `Streamable` subtype or a result wrapper such as `Page`, `GeoResults` or any other store-specific result wrapper. Can be used as `findBy…`, `findMyDomainTypeBy…` or in combination with additional keywords. -|`exists…By`| Exists projection, returning typically a `boolean` result. -|`count…By`| Count projection returning a numeric result. -|`delete…By`, `remove…By`| Delete query method returning either no result (`void`) or the delete count. -|`…First…`, `…Top…`| Limit the query results to the first `` of results. This keyword can occur in any place of the subject between `find` (and the other keywords) and `by`. -|`…Distinct…`| Use a distinct query to return only unique results. Consult the store-specific documentation whether that feature is supported. This keyword can occur in any place of the subject between `find` (and the other keywords) and `by`. 
-|=============== - -[[appendix.query.method.predicate]] -== Supported query method predicate keywords and modifiers - -The following table lists the predicate keywords generally supported by the Spring Data repository query derivation mechanism. -However, consult the store-specific documentation for the exact list of supported keywords, because some keywords listed here might not be supported in a particular store. - -.Query predicate keywords -[options="header",cols="1,3"] -|=============== -|Logical keyword|Keyword expressions -|`AND`|`And` -|`OR`|`Or` -|`AFTER`|`After`, `IsAfter` -|`BEFORE`|`Before`, `IsBefore` -|`CONTAINING`|`Containing`, `IsContaining`, `Contains` -|`BETWEEN`|`Between`, `IsBetween` -|`ENDING_WITH`|`EndingWith`, `IsEndingWith`, `EndsWith` -|`EXISTS`|`Exists` -|`FALSE`|`False`, `IsFalse` -|`GREATER_THAN`|`GreaterThan`, `IsGreaterThan` -|`GREATER_THAN_EQUALS`|`GreaterThanEqual`, `IsGreaterThanEqual` -|`IN`|`In`, `IsIn` -|`IS`|`Is`, `Equals`, (or no keyword) -|`IS_EMPTY`|`IsEmpty`, `Empty` -|`IS_NOT_EMPTY`|`IsNotEmpty`, `NotEmpty` -|`IS_NOT_NULL`|`NotNull`, `IsNotNull` -|`IS_NULL`|`Null`, `IsNull` -|`LESS_THAN`|`LessThan`, `IsLessThan` -|`LESS_THAN_EQUAL`|`LessThanEqual`, `IsLessThanEqual` -|`LIKE`|`Like`, `IsLike` -|`NEAR`|`Near`, `IsNear` -|`NOT`|`Not`, `IsNot` -|`NOT_IN`|`NotIn`, `IsNotIn` -|`NOT_LIKE`|`NotLike`, `IsNotLike` -|`REGEX`|`Regex`, `MatchesRegex`, `Matches` -|`STARTING_WITH`|`StartingWith`, `IsStartingWith`, `StartsWith` -|`TRUE`|`True`, `IsTrue` -|`WITHIN`|`Within`, `IsWithin` -|=============== - -In addition to filter predicates, the following list of modifiers is supported: - -.Query predicate modifier keywords -[options="header",cols="1,3"] -|=============== -|Keyword | Description -|`IgnoreCase`, `IgnoringCase`| Used with a predicate keyword for case-insensitive comparison. -|`AllIgnoreCase`, `AllIgnoringCase`| Ignore case for all suitable properties. Used somewhere in the query method predicate. -|`OrderBy…`| Specify a static sorting order followed by the property path and direction (e. g. `OrderByFirstnameAscLastnameDesc`). -|=============== diff --git a/src/main/asciidoc/spring-data-commons-docs/repository-query-return-types-reference.adoc b/src/main/asciidoc/spring-data-commons-docs/repository-query-return-types-reference.adoc deleted file mode 100644 index a64027e21..000000000 --- a/src/main/asciidoc/spring-data-commons-docs/repository-query-return-types-reference.adoc +++ /dev/null @@ -1,43 +0,0 @@ -[appendix] -[[repository-query-return-types]] -= Repository query return types - -[[appendix.query.return.types]] -== Supported Query Return Types - -The following table lists the return types generally supported by Spring Data repositories. -However, consult the store-specific documentation for the exact list of supported return types, because some types listed here might not be supported in a particular store. - -NOTE: Geospatial types (such as `GeoResult`, `GeoResults`, and `GeoPage`) are available only for data stores that support geospatial queries. -Some store modules may define their own result wrapper types. - -.Query return types -[options="header",cols="1,3"] -|=============== -|Return type|Description -|`void`|Denotes no return value. -|Primitives|Java primitives. -|Wrapper types|Java wrapper types. -|`T`|A unique entity. Expects the query method to return one result at most. If no result is found, `null` is returned. More than one result triggers an `IncorrectResultSizeDataAccessException`. -|`Iterator`|An `Iterator`. 
-|`Collection<T>`|A `Collection<T>`.
-|`List<T>`|A `List<T>`.
-|`Optional<T>`|A Java 8 or Guava `Optional`. Expects the query method to return one result at most. If no result is found, `Optional.empty()` or `Optional.absent()` is returned. More than one result triggers an `IncorrectResultSizeDataAccessException`.
-|`Option<T>`|Either a Scala or Vavr `Option` type. Semantically the same behavior as Java 8's `Optional`, described earlier.
-|`Stream<T>`|A Java 8 `Stream<T>`.
-|`Streamable<T>`|A convenience extension of `Iterable` that directly exposes methods to stream, map, and filter results, concatenate them, and so on.
-|Types that implement `Streamable` and take a `Streamable` constructor or factory method argument|Types that expose a constructor or `….of(…)`/`….valueOf(…)` factory method taking a `Streamable` as argument. See <<repositories.collections-and-iterables.streamable-wrapper>> for details.
-|Vavr `Seq`, `List`, `Map`, `Set`|Vavr collection types. See <<repositories.collections-and-iterables.vavr>> for details.
-|`Future<T>`|A `Future<T>`. Expects a method to be annotated with `@Async` and requires Spring's asynchronous method execution capability to be enabled.
-|`CompletableFuture<T>`|A Java 8 `CompletableFuture<T>`. Expects a method to be annotated with `@Async` and requires Spring's asynchronous method execution capability to be enabled.
-|`Slice<T>`|A sized chunk of data with an indication of whether there is more data available. Requires a `Pageable` method parameter.
-|`Page<T>`|A `Slice<T>` with additional information, such as the total number of results. Requires a `Pageable` method parameter.
-|`GeoResult<T>`|A result entry with additional information, such as the distance to a reference location.
-|`GeoResults<T>`|A list of `GeoResult<T>` with additional information, such as the average distance to a reference location.
-|`GeoPage<T>`|A `Page` with `GeoResult<T>`, such as the average distance to a reference location.
-|`Mono<T>`|A Project Reactor `Mono` emitting zero or one element using reactive repositories. Expects the query method to return one result at most. If no result is found, `Mono.empty()` is returned. More than one result triggers an `IncorrectResultSizeDataAccessException`.
-|`Flux<T>`|A Project Reactor `Flux` emitting zero, one, or many elements using reactive repositories. Queries returning `Flux` can also emit an infinite number of elements.
-|`Single<T>`|A RxJava `Single` emitting a single element using reactive repositories. Expects the query method to return one result at most. If no result is found, the `Single` terminates with an error. More than one result triggers an `IncorrectResultSizeDataAccessException`.
-|`Maybe<T>`|A RxJava `Maybe` emitting zero or one element using reactive repositories. Expects the query method to return one result at most. If no result is found, an empty `Maybe` is returned. More than one result triggers an `IncorrectResultSizeDataAccessException`.
-|`Flowable<T>`|A RxJava `Flowable` emitting zero, one, or many elements using reactive repositories. Queries returning `Flowable` can also emit an infinite number of elements.
-|=============== diff --git a/src/test/java/org/springframework/data/aerospike/config/BlockingTestConfig.java b/src/test/java/org/springframework/data/aerospike/config/BlockingTestConfig.java index 3b811a79a..72b9787ad 100644 --- a/src/test/java/org/springframework/data/aerospike/config/BlockingTestConfig.java +++ b/src/test/java/org/springframework/data/aerospike/config/BlockingTestConfig.java @@ -20,7 +20,6 @@ import org.springframework.transaction.support.TransactionTemplate; import org.testcontainers.containers.GenericContainer; -import java.util.Arrays; import java.util.List; /** @@ -36,7 +35,7 @@ public class BlockingTestConfig extends AbstractAerospikeDataConfiguration { @Override protected List customConverters() { - return Arrays.asList( + return List.of( SampleClasses.CompositeKey.CompositeKeyToStringConverter.INSTANCE, SampleClasses.CompositeKey.StringToCompositeKeyConverter.INSTANCE ); diff --git a/src/test/java/org/springframework/data/aerospike/config/ReactiveTestConfig.java b/src/test/java/org/springframework/data/aerospike/config/ReactiveTestConfig.java index 70fb3a743..4bde9925e 100644 --- a/src/test/java/org/springframework/data/aerospike/config/ReactiveTestConfig.java +++ b/src/test/java/org/springframework/data/aerospike/config/ReactiveTestConfig.java @@ -21,7 +21,6 @@ import org.springframework.transaction.support.DefaultTransactionDefinition; import org.testcontainers.containers.GenericContainer; -import java.util.Arrays; import java.util.List; /** @@ -37,7 +36,7 @@ public class ReactiveTestConfig extends AbstractReactiveAerospikeDataConfigurati @Override protected List customConverters() { - return Arrays.asList( + return List.of( SampleClasses.CompositeKey.CompositeKeyToStringConverter.INSTANCE, SampleClasses.CompositeKey.StringToCompositeKeyConverter.INSTANCE );