---
id: command-reference
title: Command Reference
sidebar_label: Command Reference
---
Find a comprehensive description of each command available from the LiveData Migrator action prompt here. Review the LiveData Migrator Operation guide for information on how to use the commands to perform migration.
Each command description below includes the information available from the action prompt using the help command. Tab completion will also guide you on available options as you enter commands and help auto-complete the needed values.
:::info Should you need clarification on any configuration items shown in the UI, you will also find them here (located with the equivalent CLI mandatory and optional parameters). :::
Clear all information that LiveData Migrator maintains about the source file system by issuing the source clear
command. This allows you to define an alternative to a source that was previously defined or detected automatically.
SYNOPSYS
source clear
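A minimal sketch of the workflow (the identifier and NameNode address below are placeholders): clear the existing source, then define a replacement with filesystem add hdfs.
source clear
filesystem add hdfs --file-system-id newsource --source --default-fs hdfs://namenode.example.com:8020 --user hdfs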
Use source del
to delete information about a specific source file system by identifier. You can obtain the identifier for a source file system with the output of the source fs show
command.
SYNOPSYS
source del [--file-system-id] string
OPTIONS
--file-system-id string
[Mandatory]
--file-system-id
The identifier of the source file system resource to delete. This is referenced in the UI as Storage Name.
source del --file-system-id auto-discovered-source-hdfs
Get information about the source file system configuration.
SYNOPSYS
source fs show [--verbose]
OPTIONS
--verbose
[Optional, default = false]
--verbose
Include all configuration properties for the source file system in the response.
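For example, to list every configuration property of the source file system:
source fs show --verbose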
Add an Azure Data Lake Storage Gen 2 container as a migration target using the filesystem add adls2 oauth
command, which requires a service principal and OAuth 2 credentials.
:::note The service principal that you want to use must have the Storage Blob Data Owner role assigned to the ADLS Gen2 storage account. See the Microsoft documentation for further guidance. :::
SYNOPSYS
filesystem add adls2 oauth [--file-system-id] string
[--storage-account-name] string
[--oauth2-client-id] string
[--oauth2-client-secret] string
[--oauth2-client-endpoint] string
[--container-name] string
[--insecure]
[[--properties-files] list]
[[--properties] string]
OPTIONS
--file-system-id string
[Mandatory]
--storage-account-name string
[Mandatory]
--oauth2-client-id string
[Mandatory]
--oauth2-client-secret string
[Mandatory]
--oauth2-client-endpoint string
[Mandatory]
--container-name string
[Mandatory]
--insecure
[Optional, default = false]
--properties-files list
Load properties from this file
[Optional, default = <none>]
--properties string
Override properties in comma separated key/value string e.g. --properties property-one=value-one,\"property-two=value-one,value-two\"
[Optional, default = <nothing>]
--file-system-id
The identifier to give the new file system resource. This is referenced in the UI as Storage Name.
--storage-account-name
The name of the ADLS Gen 2 storage account to target. This is referenced in the UI as Account Name.
--oauth2-client-id
The client ID (also known as application ID) for your Azure service principal. This is referenced in the UI as Client ID.
--oauth2-client-secret
The client secret (also known as application secret) for the Azure service principal. This is referenced in the UI as Secret.
--oauth2-client-endpoint
The client endpoint for the Azure service principal. This is referenced in the UI as Endpoint.
This will often take the form of https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token where {tenant} is the directory ID for the Azure service principal. You can specify a custom URL if desired (such as a proxy endpoint that manually interfaces with Azure Active Directory).
--container-name
The name of the container in the storage account to which content will be migrated. This is referenced in the UI as Container Name.
--insecure
When provided, LiveData Migrator will not use TLS to encrypt communication with ADLS Gen 2. This may improve throughput, but should only be used when you have other means of securing communication. This is referenced in the UI when Use Secure Protocol is unchecked.
--properties-files
Reference a list of existing properties files, each containing Hadoop configuration properties in the format used by core-site.xml or hdfs-site.xml.
--properties
Specify properties to use in a comma-separated key/value list.
filesystem add adls2 oauth --file-system-id mytarget --storage-account-name myadls2 --oauth2-client-id b67f67ex-ampl-e2eb-bd6d-client9385id --oauth2-client-secret 2IPO8*secretk-9OPs8n*TexampleHJ= --oauth2-client-endpoint https://login.microsoftonline.com/78u098ex-ampl-e498-8bce-ndpoint5f2e5/oauth2/v2.0/token --container-name lm2target
Add an Azure Data Lake Storage Gen 2 container as a migration target using the filesystem add adls2 sharedKey
command, which requires credentials in the form of an account key.
SYNOPSYS
filesystem add adls2 sharedKey [--file-system-id] string
[--storage-account-name] string
[--shared-key] string
[--container-name] string
[--insecure]
[[--properties-files] list]
[[--properties] string]
OPTIONS
--file-system-id string
[Mandatory]
--storage-account-name string
[Mandatory]
--shared-key string
[Mandatory]
--container-name string
[Mandatory]
--insecure
[Optional, default = false]
--properties-files list
Load properties from these files
[Optional, default = <nothing>]
--properties string
Override properties in comma separated key/value string e.g. --properties property-one=value-one,\"property-two=value-one,value-two\"
[Optional, default = <nothing>]
--file-system-id
The identifier to give the new file system resource. This is referenced in the UI as Storage Name.
--storage-account-name
The name of the ADLS Gen 2 storage account to target. This is referenced in the UI as Account Name.
--shared-key
The shared account key to use as credentials to write to the storage account. This is referenced in the UI as Access Key.
--container-name
The name of the container in the storage account to which content will be migrated. This is referenced in the UI as Container Name.
--insecure
When provided, LiveData Migrator will not use TLS to encrypt communication with ADLS Gen 2. This may improve throughput, but should only be used when you have other means of securing communication. This is referenced in the UI when Use Secure Protocol is unchecked.
--properties-files
Reference a list of existing properties files, each containing Hadoop configuration properties in the format used by core-site.xml or hdfs-site.xml.
--properties
Specify properties to use in a comma-separated key/value list.
filesystem add adls2 sharedKey --file-system-id mytarget --storage-account-name myadls2 --container-name lm2target --shared-key Yi8NxHGqoQ79DBGLVn+COK/sRDwbNqAEXAMPLEDaMxRkvXt2ijUtASHAREDj/vaS/NbzR5rtjEKEY31eIopUVA==
Add a Google Cloud Storage bucket as a migration target using the filesystem add gcs
command, which requires credentials in the form of an account key file.
SYNOPSYS
filesystem add gcs [--file-system-id] string
[[--service-account-json-key-file] string]
[[--service-account-p12-key-file] string]
[[--service-account-json-key-file-server-location] string]
[[--service-account-p12-key-file-server-location] string]
[[--service-account-email] string]
[--bucket-name] string
[[--properties-files] list]
[[--properties] string]
OPTIONS
--file-system-id string
[Mandatory]
--service-account-json-key-file string
[Optional, default = <none>]
--service-account-p12-key-file string
[Optional, default = <none>]
--service-account-json-key-file-server-location string
Permanent location of the GCS KeyFile on the LiveData Migrator server
[Optional, default = <none>]
--service-account-p12-key-file-server-location string
Permanent location of the GCS KeyFile on the LiveData Migrator server
[Optional, default = <none>]
--service-account-email string
GCS Service Account Email
[Optional, default = <none>]
--bucket-name string
[Mandatory]
--properties-files list
Load properties from these files
[Optional, default = <nothing>]
--properties string
Override properties in comma separated key/value string e.g. --properties property-one=value-one,\"property-two=value-one,value-two\"
[Optional, default = <nothing>]
--file-system-id
The identifier to give the new file system resource. This is referenced in the UI as Storage Name.
--bucket-name
The bucket name of a Google Cloud Storage account. This is referenced in the UI as Bucket Name.
:::info Provide your service account key for the GCS bucket by choosing one of the parameters below.
You can also upload the service account key directly when using the UI (this is not supported through the CLI). :::
--service-account-json-key-file-server-location
The absolute filesystem path on the LiveData Migrator server of your service account key file in JSON format. You can either create a GCS service account key or use an existing one.
This is referenced in the UI as Key File when the Key File Options -> Provide a Path option is selected.
--service-account-p12-key-file-server-location
The absolute filesystem path on the LiveData Migrator server of your service account key file in P12 format. You can either create a GCS service account key or use an existing one.
This is referenced in the UI as Key File when the Key File Options -> Provide a Path option is selected.
--service-account-json-key-file
The absolute filesystem path on the host running the LiveData Migrator CLI of your service account key file in JSON format. Only use this parameter if you are running the LiveData Migrator CLI on a different host to your LiveData Migrator server.
--service-account-p12-key-file
The absolute filesystem path on the host running the LiveData Migrator CLI of your service account key file in P12 format. Only use this parameter if you are running the LiveData Migrator CLI on a different host to your LiveData Migrator server.
--service-account-email
The email address linked to your GCS service account. This is referenced in the UI as Email address and is required when selecting the Upload P12 Key File option.
--properties-files
Reference a list of existing properties files, each containing Hadoop configuration properties in the format used by core-site.xml or hdfs-site.xml.
--properties
Specify properties to use in a comma-separated key/value list.
filesystem add gcs --file-system-id gcsAgent --bucket-name myGcsBucket --service-account-p12-key-file-server-location /user/hdfs/targetStorage/myAccountKey.p12 --service-account-email [email protected]
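A minimal sketch using a JSON key file stored on the LiveData Migrator server instead of a P12 key (the identifier and path below are placeholders):
filesystem add gcs --file-system-id gcsJsonAgent --bucket-name myGcsBucket --service-account-json-key-file-server-location /user/hdfs/targetStorage/myAccountKey.json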
Add a Hadoop Distributed File System as either a migration source or target using the filesystem add hdfs
command.
You will normally only need to create an HDFS file system resource with this command when migrating to a target HDFS (rather than to another storage service like ADLS Gen 2 or S3a). LiveData Migrator will attempt to auto-detect the source HDFS file system when started from the command line unless Kerberos is enabled on your source environment.
If Kerberos is enabled on your source environment, use the filesystem auto-discover-source hdfs
command to provide Kerberos credentials and auto-discover your source HDFS configuration.
SYNOPSYS
filesystem add hdfs [--file-system-id] string
[[--default-fs] string]
[[--user] string]
[[--kerberos-principal] string]
[[--kerberos-keytab] string]
[--source]
[[--properties-files] list]
[[--properties] string]
OPTIONS
--file-system-id string
Name of the filesystem
[Mandatory]
--default-fs string
[Optional, default = <nothing>]
--user string
FileSystem username to perform migration actions as
[Optional, default = <nothing>]
--kerberos-principal string
Kerberos principal to authenticate with and perform migration actions as
[Optional, default = <none>]
--kerberos-keytab string
Kerberos keytab to use when authenticating the provided kerberos principal
[Optional, default = <none>]
--source Add this filesystem as the source for migrations
[Optional, default = false]
--properties-files list
Load properties from these files
[Optional, default = <nothing>]
--properties string
Override properties in comma separated key/value string e.g. --properties property-one=value-one,\"property-two=value-one,value-two\"
[Optional, default = <nothing>]
--file-system-id
The identifier to give the new file system resource. This is referenced in the UI as Storage Name.
--default-fs
A string that defines how LiveData Migrator accesses HDFS. This is referenced in the UI as Default FS.
It can be specified in a number of forms:
- As a single HDFS URI, such as hdfs://192.168.1.10:8020 (using an IP address) or hdfs://myhost.localdomain:8020 (using a hostname).
- As a comma-separated list of HDFS URIs, like hdfs://nn1.localdomain:8020,hdfs://nn2.localdomain:8020, to allow for integration with HA-enabled Hadoop environments.
- As an HDFS URI that references a nameservice ID defined in the cluster properties, like hdfs://mynameservice, where a configuration property for the cluster defines the dfs.nameservices value to include that nameservice ID (like mynameservice), along with all required configuration properties for that nameservice, such as dfs.ha.namenodes.mynameservice, dfs.namenode.rpc-address.mynameservice.nn1, and dfs.namenode.http-address.mynameservice.nn1.
--user
The name of the HDFS user to be used when performing operations against the file system. In environments where Kerberos is disabled, this user must be the HDFS super user, such as hdfs.
--kerberos-principal
The Kerberos principal to authenticate with and perform migrations as. This principal should map to the HDFS super user using auth_to_local rules.
--kerberos-keytab
The Kerberos keytab containing the principal defined for the --kerberos-principal parameter. This must be accessible to the local system user running the LiveData Migrator service (default is hdfs).
--source
Provide this parameter to use the file system resource created as a source. This is referenced in the UI when configuring the Unknown source.
--properties-files
Reference a list of existing properties files, each containing Hadoop configuration properties in the format used by core-site.xml or hdfs-site.xml.
--properties
Specify properties to use in a comma-separated key/value list.
filesystem add hdfs --file-system-id mysource --source --default-fs hdfs://sourcenameservice --properties-files /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
filesystem add hdfs --file-system-id mysource --source --default-fs hdfs://sourcenameservice --properties-files /etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml --kerberos-keytab /etc/security/keytabs/hdfs.headless.keytab --kerberos-principal [email protected]
:::note When specifying an HDFS filesystem as a target, the property files (and Kerberos keytab) for the target cluster must exist on the local filesystem and be accessible to the LiveData Migrator system user. :::
filesystem add hdfs --file-system-id mytarget --default-fs hdfs://targetnameservice --properties-files /etc/targetClusterConfig/core-site.xml,/etc/targetClusterConfig/hdfs-site.xml --kerberos-keytab /etc/targetClusterKeytabs/hdfs.headless.keytab --kerberos-principal [email protected]
filesystem add hdfs --file-system-id mytarget --default-fs hdfs://namenode.targetdomain:8020 --user hdfs
Add an S3 bucket as a target file system using the filesystem add s3a
command. This method also supports IBM COS buckets.
SYNOPSYS
filesystem add s3a [--file-system-id] string
[--bucket-name] string
[[--access-key] string]
[[--secret-key] string]
[--credentials-provider] string
[[--properties-files] list]
[[--properties] list]
OPTIONS
--file-system-id string
[Mandatory]
--bucket-name string
[Mandatory]
--access-key string
[Optional, default = <none>]
--secret-key string
[Optional, default = <none>]
--credentials-provider string
[Optional, default = <none>]
--properties-files list
Load properties from these files
[Optional, default = <none>]
--properties string
Override properties in comma separated key/value string e.g. --properties property-one=value-one,\"property-two=value-one,value-two\"
[Optional, default = <nothing>]
--file-system-id
The identifier for the new file system resource. This is referenced in the UI as Storage Name.
--bucket-name
The name of your S3 bucket. This is referenced in the UI as Bucket Name.
--credentials-provider
The Java class name of a credentials provider for authenticating with the S3 endpoint. This is referenced in the UI as Credentials Provider. This is not a required parameter when adding an IBM COS bucket through the UI.
The provider options available include:
- org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
  Use this provider to offer credentials as an access key and secret access key with the --access-key and --secret-key parameters.
- com.amazonaws.auth.InstanceProfileCredentialsProvider
  Use this provider when running LiveData Migrator on an EC2 instance that has been assigned an IAM role with policies that allow it to access the S3 bucket.
- com.amazonaws.auth.DefaultAWSCredentialsProviderChain
  A commonly-used credentials provider chain that looks for credentials in this order:
  1. Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, or AWS_ACCESS_KEY and AWS_SECRET_KEY.
  2. Java system properties - aws.accessKeyId and aws.secretKey.
  3. Web Identity Token credentials from the environment or container.
  4. Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI.
  5. Credentials delivered through the Amazon EC2 container service, if the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable is set and the security manager has permission to access the variable.
  6. Instance profile credentials delivered through the Amazon EC2 metadata service.
- Endpoint (UI & IBM COS only): This is required when adding an IBM COS bucket. IBM provide a list of available endpoints in their public documentation.
--access-key
When using the org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider credentials provider, specify the access key with this parameter. This is referenced in the UI as Access Key. This is a required parameter when adding an IBM COS bucket.
--secret-key
When using the org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider credentials provider, specify the secret key with this parameter. This is referenced in the UI as Secret Key. This is a required parameter when adding an IBM COS bucket.
--properties-files
Reference a list of existing properties files, each containing Hadoop configuration properties in the format used by core-site.xml or hdfs-site.xml.
--properties
Specify properties to use in a comma-separated key/value list.
:::info
When adding properties via the UI or API, for example to set a custom fs.s3a.endpoint, you must also set the following properties manually. They are added by default when using the CLI.
:::
- fs.s3a.impl (default org.apache.hadoop.fs.s3a.S3AFileSystem): The implementation class of the S3A Filesystem.
- fs.AbstractFileSystem.s3a.impl (default org.apache.hadoop.fs.s3a.S3A): The implementation class of the S3A AbstractFileSystem.
- fs.s3a.user.agent.prefix (default WANdisco/LiveDataMigrator): Sets a custom value that will be prepended to the User-Agent header sent in HTTP requests to the S3 back-end by S3AFileSystem.
- fs.s3a.impl.disable.cache (default true): Disables the S3 filesystem cache when set to true.
- fs.hadoop.tmp.dir (default tmp): The parent directory for other temporary directories.
filesystem add s3a --file-system-id mytarget --bucket-name mybucket1 --credentials-provider org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider --access-key B6ZAI18Z3UIO002Y777A --secret-key OP87Chokisf4hsTP0Q5j95yI904lT7AaDBGJpp0D
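As a sketch, you could also supply a custom endpoint (for example, for an IBM COS bucket) from the CLI by passing the Hadoop property fs.s3a.endpoint through --properties; the bucket, keys, and endpoint below are placeholders:
filesystem add s3a --file-system-id ibmtarget --bucket-name mycosbucket --credentials-provider org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider --access-key MYACCESSKEYEXAMPLE --secret-key MYSECRETKEYEXAMPLE --properties fs.s3a.endpoint=s3.private.eu-gb.cloud-object-storage.appdomain.cloud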
Discover your local HDFS filesystem by specifying the Kerberos credentials for your source environment.
You can also manually configure the source HDFS filesystem using the filesystem add hdfs
command.
SYNOPSYS
filesystem auto-discover-source hdfs [[--kerberos-principal] string]
[[--kerberos-keytab] string]
OPTIONS
--kerberos-principal string
Kerberos principal to authenticate with and perform migration actions as
[Optional, default = <none>]
--kerberos-keytab string
Kerberos keytab to use when authenticating the provided kerberos principal
[Optional, default = <none>]
--kerberos-principal
The Kerberos principal to authenticate with and perform migrations as. This principal should map to the HDFS super user using auth_to_local rules.
--kerberos-keytab
The Kerberos keytab containing the principal defined for the --kerberos-principal parameter. This must be accessible to the local system user running the LiveData Migrator service (default is hdfs).
filesystem auto-discover-source hdfs --kerberos-keytab /etc/security/keytabs/hdfs.headless.keytab --kerberos-principal [email protected]
Delete all target file system references with the filesystem clear command. This leaves any migrated content intact in those targets, but removes all resources that act as references to the target file systems.
NAME
filesystem clear - Delete all targets.
SYNOPSYS
filesystem clear
Delete a specific file system resource by identifier. This leaves all migrated content intact at that target, but removes the resource that acts as a reference to that file system.
SYNOPSYS
filesystem del [--file-system-id] string
OPTIONS
--file-system-id string
[Mandatory]
--file-system-id
The identifier of the file system resource to delete. This is referenced in the UI as Storage Name.
filesystem del --file-system-id mytarget
List defined file system resources.
SYNOPSYS
filesystem list [--verbose]
OPTIONS
--verbose
[Optional, default = false]
--verbose
Include all properties for each file system in the JSON result.
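For example, to include all properties for each defined file system:
filesystem list --verbose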
View details for a file system resource.
SYNOPSYS
filesystem show [--file-system-id] string [--verbose]
OPTIONS
--file-system-id string
[Mandatory]
--verbose
[Optional, default = false]
--file-system-id
The identifier of the file system resource to show. This is referenced in the UI as Storage Name.
filesystem show --file-system-id mytarget
View information about the file system types available for use with LiveData Migrator. File systems that provide an eventListenerType
other than no-op
can be used in migrations that will migrate ongoing changes during operation.
SYNOPSYS
filesystem types
Create a date-based exclusion that checks the 'modified date' of any directory or file that the LiveData Migrator encounters during a migration to which the exclusion has been applied. If the path or file being examined by LiveData Migrator has a 'modified date' earlier than the specified date, it will be excluded from the migration.
Once associated with a migration using migration exclusion add
, files that match the policy will not be migrated.
SYNOPSYS
exclusion add date [--exclusion-id] string
[--description] string
[--before-date] string
OPTIONS
--exclusion-id string
[Mandatory]
--description string
[Mandatory]
--before-date string
Expects the ISO format date-time, with an offset, e.g. "2011-12-03T10:15:30+01:00"
[Mandatory]
--exclusion-id
The identifier for the exclusion policy. This is referenced in the UI as Name.
--description
A user-friendly description for the policy. This is referenced in the UI as Description.
--before-date
An ISO formatted date and time, which can include an offset for a particular time zone. This is referenced in the UI as TBA.
exclusion add date --exclusion-id beforeDate --description "Files earlier than 2020-10-01T10:00:00PDT" --before-date 2020-10-01T10:00:00-07:00
Create an exclusion that can be applied to migrations to constrain the files transferred by a policy based on file size. Once associated with a migration using migration exclusion add
, files that match the policy will not be migrated.
SYNOPSYS
exclusion add file-size [--exclusion-id] string
[--description] string
[--value] long
[--unit] string
OPTIONS
--exclusion-id string
[Mandatory]
--description string
[Mandatory]
--value long
[Mandatory]
--unit string
[Mandatory]
--exclusion-id
The identifier for the exclusion policy. This is referenced in the UI as Name.
--description
A user-friendly description for the policy. This is referenced in the UI as Description.
--value
The numerical value for the file size, in the unit defined by --unit. This is referenced in the UI as Value.
--unit
A string to define the unit used: either B for bytes, GB for gibibytes, KB for kibibytes, MB for mebibytes, PB for pebibytes, or TB for tebibytes.
exclusion add file-size --exclusion-id 100mbfiles --description "Files greater than 100 MB" --value 100 --unit MB
Create an exclusion that can be applied to migrations to constrain the files transferred by a policy based on matching file name by regular expression. Once associated with a migration using migration exclusion add
, files that match the policy will not be migrated.
SYNOPSYS
exclusion add regex [--exclusion-id] string
[--description] string
[--regex] string
OPTIONS
--exclusion-id string
[Mandatory]
--description string
[Mandatory]
--regex string
[Mandatory]
--exclusion-id
The identifier for the exclusion policy. This is referenced in the UI as Name.
--description
A user-friendly description for the policy. This is referenced in the UI as Description.
--regex
A regular expression in a syntax similar to that used by Perl. This is referenced in the UI as Regex.
exclusion add regex --description "No paths that start with test" --exclusion-id exclusion1 --regex ^test\.*
If you wish to use a \
character as part of your regex value, you must escape this character with an additional backslash.
exclusion add regex --description "No paths that start with a backslash followed by test" --exclusion-id exclusion2 --regex ^\\test\.*
The response displayed when running through the CLI will not hide the additional backslash. However, the internal representation within LiveData Migrator will be as expected (it will read as ^\test.*).
This workaround is not required for API inputs, as it only affects the Spring Shell implementation used for the CLI.
Delete an exclusion policy so that it is no longer available for migrations.
NAME
exclusion del - Delete an exclusion rule.
SYNOPSYS
exclusion del [--exclusion-id] string
OPTIONS
--exclusion-id string
[Mandatory]
--exclusion-id
The identifier for the exclusion policy to delete. This is referenced in the UI as Name.
exclusion del --exclusion-id exclusion1
List all exclusion policies defined.
NAME
exclusion list - List all exclusion rules.
SYNOPSYS
exclusion list
Get details for an individual exclusion policy by identifier.
SYNOPSYS
exclusion show [--exclusion-id] string
OPTIONS
--exclusion-id string
[Mandatory]
--exclusion-id
The identifier for the exclusion policy to show. This is referenced in the UI as Name.
exclusion show --exclusion-id 100mbfiles
Stop a migration from transferring content to its target, placing it into the STOPPED
state. Stopped migrations can be resumed.
SYNOPSYS
migration stop [--migration-id] string
OPTIONS
--migration-id string
[Mandatory]
--migration-id
The identifier of the migration to stop.
migration stop --migration-id 4ffa620b6ebb0cd34f2c591220d93830f91ccc7e
Resume a migration that you've stopped from transferring content to its target.
SYNOPSYS
migration resume [--migration-id] string
OPTIONS
--migration-id string
[Mandatory]
--migration-id
The identifier of the migration to resume.
migration resume --migration-id 4ffa620b6ebb0cd34f2c591220d93830f91ccc7e
Delete a stopped migration resource.
SYNOPSYS
migration del [--migration-id] string
OPTIONS
--migration-id string
[Mandatory]
--migration-id
The identifier of the migration to delete.
migration del --migration-id 4ffa620b6ebb0cd34f2c591220d93830f91ccc7e
Associate an exclusion resource with a migration so that the exclusion policy applies to items processed for the migration. Exclusions must be associated with a migration before they take effect.
SYNOPSYS
migration exclusion add [--migration-id] string
[--exclusion-id] string
OPTIONS
--migration-id string
[Mandatory]
--exclusion-id string
[Mandatory]
--migration-id
The identifier of the migration with which to associate the exclusion.
--exclusion-id
The identifier of the exclusion to associate with the migration. This is referenced in the UI as Name.
migration exclusion add --migration-id 4ffa620b6ebb0cd34f2c591220d93830f91ccc7e --exclusion-id myexclusion1
Remove an exclusion from association with a migration so that its policy no longer applies to items processed for the migration.
SYNOPSYS
migration exclusion del [--migration-id] string
[--exclusion-id] string
OPTIONS
--migration-id string
[Mandatory]
--exclusion-id string
[Mandatory]
--migration-id
The identifier of the migration from which to remove the exclusion.
--exclusion-id
The identifier of the exclusion to remove from the migration. This is referenced in the UI as Name.
migration exclusion del --migration-id 4ffa620b6ebb0cd34f2c591220d93830f91ccc7e --exclusion-id myexclusion1
Present the list of all migrations defined.
SYNOPSYS
migration list
Create a new migration to initiate data migration from your source file system.
SYNOPSYS
migration add [[--migration-id] string]
[--path] string
[--target] string
[[--exclusions] string]
[[--action-policy] string]
[--auto-start]
OPTIONS
--migration-id string
[Optional, default = <nothing>]
--path string
[Mandatory]
--target string
[Mandatory]
--exclusions string
[Optional, default = <none>]
--action-policy string
[Optional, default = com.wandisco.livemigrator2.migration.OverwriteActionPolicy]
--auto-start
[Optional, default = false]
--path
Defines the source file system directory that is the scope of the migration. All content (other than that excluded) will be migrated to the target. This is referenced in the UI as Path for {source-filesystem}.
--target
Specifies the name of the target file system resource to which migration will occur. This is referenced in the UI as Target.
--migration-id
Provide an identifier for the new migration. An identifier will be auto-generated if one is not provided.
--exclusions
A comma-separated list of exclusions by name. This is referenced in the UI as Add new exclusion.
--auto-start
Provide this parameter if you want the migration to start immediately. If not provided, the migration will not start until you run it with migration run. This is referenced in the UI as Auto-start migration.
--action-policy
This parameter determines what happens if the migration encounters content in the target path with the same name and size. This is referenced in the UI as Skip Or Overwrite Settings.
There are two options available:
- com.wandisco.livemigrator2.migration.OverwriteActionPolicy (default policy)
  Every file is replaced, even if the file size is identical on the target storage. This is referenced in the UI as Overwrite.
- com.wandisco.livemigrator2.migration.SkipIfSizeMatchActionPolicy
  If the file size is identical between the source and target, the file is skipped. If it's a different size, the whole file is replaced. This is referenced in the UI as Skip if Size Match.
migration add --path /repl1 --target mytarget --migration-id myNewMigration --exclusions 100mbfiles
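As a further sketch (the identifiers here are hypothetical), to create and immediately start a migration that skips files whose size already matches on the target:
migration add --path /repl2 --target mytarget --migration-id mySkipMigration --action-policy com.wandisco.livemigrator2.migration.SkipIfSizeMatchActionPolicy --auto-start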
Add a pending region manually to re-scan a directory if it has been updated.
SYNOPSYS
migration pending-region add [--migration-id] string
[--path] string
--migration-id
The identifier of the migration to add the pending region to.
--path
The path to the pending region you want to add.
migration pending-region add --migration-id myFirstMigration --path /dir1/userA
Start a migration that was created without the --auto-start
parameter.
SYNOPSYS
migration run [--migration-id] string
OPTIONS
--migration-id string
[Mandatory]
--migration-id
The identifier of the migration to run.
migration run --migration-id myNewMigration
Provide a JSON description of a specific migration.
NAME
migration show - Get migration details.
SYNOPSYS
migration show [--migration-id] string
OPTIONS
--migration-id string
[Mandatory]
--migration-id
The identifier of the migration to show.
migration show --migration-id myNewMigration
Get a text description of the overall status of migrations. Information is provided on the following:
- Total number of migrations defined.
- Average bandwidth being used over 10s, 60s, and 300s intervals.
- Peak bandwidth observed over 300s interval.
- Average file transfer rate per second over 10s, 60s, and 300s intervals.
- Peak file transfer rate per second over a 300s interval.
- List of live migrations with source path and migration id.
- List of running migrations with source path and migration id.
- List of non-running migrations with source path and migration id.
NAME
status - Get migration status.
SYNOPSYS
status
status
Total Migrations: 1
Average Bandwidth: 0.00 Gb/s, 0.00 Gb/s, 0.00 Gb/s
Peak Bandwidth: 0.00 Gb/s
Average Files/s: 0, 0, 0
Peak Files/s: 0
Live: 0
Running: 0
Ready: 1
/repl1 5c7271676c8f858ad11011bfa155fc8e43b8fe32
Delete the current bandwidth policy and revert to the default policy (unlimited bandwidth).
SYNOPSYS
bandwidth policy del
Set the bandwidth policy that will determine how much bandwidth LiveData Migrator can use.
If no policy is defined, the default policy is unlimited bandwidth.
SYNOPSYS
bandwidth policy set [--value] long [--unit] string
OPTIONS
--value long
[Mandatory]
--unit string
[Mandatory]
--value
Define the number of byte units.
--unit
Define the byte unit to be used.
Decimal units: KB, MB, GB, TB, PB, EB, ZB, YB
Binary units: KiB, MiB, GiB, TiB, PiB, EiB, ZiB, YiB
bandwidth policy set --value 10 --unit MB
bandwidth policy set --value 10 --unit MiB
Display the current bandwidth policy.
SYNOPSYS
bandwidth policy show
Add an Azure hive agent to connect to an Azure SQL database using the hive agent add azure
command.
:::info The Azure hive agent requires an ADLS Gen2 storage account and container name, but these are only used to generate the correct location for the database. The container will not be accessed by the Hive agent and no data will be written to the container. :::
SYNOPSYS
hive agent add azure [[--name] string] [--db-server-name] string [--database-name] string [--database-user] string [--database-password] string [--storage-account] string [--container-name] string [[--root-folder] string] [[--hdi-version] string] [[--insecure] boolean]
OPTIONS
--name string
name of the agent
[Optional, default = <nothing>]
--db-server-name string
Azure SQL database server name
[Mandatory]
--database-name string
Azure SQL database name
[Mandatory]
--database-user string
Azure SQL database user
[Mandatory]
--database-password string
Azure SQL database password
[Mandatory]
--storage-account string
Azure storage account name
[Mandatory]
--container-name string
Azure storage account container name
[Mandatory]
--root-folder string
Azure root folder
[Optional, default = <nothing>]
--hdi-version string
Azure HDI version (3.6 or 4.0)
[Optional, default = <nothing>]
--insecure boolean
use insecure connection to Azure
[Optional, default = <nothing>]
--db-server-name
The Azure SQL database server name. Only the name given to the server is required; the .database.windows.net suffix should be omitted.
--database-name
The Azure SQL database name.
--database-user
The user name to access the database.
--database-password
The user password to access the database.
--storage-account
The name of the ADLS Gen 2 storage account.
--container-name
The name of the container in the ADLS Gen2 storage account.
--name
The identifier to give to the new Hive agent.
--root-folder
The root directory for the Azure database.
--hdi-version
The HDI version. This is relevant if you intend to integrate your SQL server into an HDInsights cluster.
--insecure
Define an insecure connection (SSL disabled) to the Azure SQL database server (default is false).
hive agent add azure --name azureAgent --db-server-name mysqlserver --database-name mydb1 --database-user azureuser --database-password mypassword --storage-account myadls2 --container-name mycontainer --root-folder /hive/warehouse --hdi-version 3.6
Add a filesystem hive agent to connect to your host's local filesystem using the hive agent add filesystem
command.
SYNOPSYS
hive agent add filesystem [--filesystem-id] string [--root-folder] string [[--name] string]
OPTIONS
--filesystem-id string
filesystem id to use by filesystem agent
[Mandatory]
--root-folder string
path to use as root folder by filesystem agent
[Mandatory]
--name string
name of the agent
[Optional, default = <nothing>]
--filesystem-id
The filesystem identifier to be used.
--root-folder
The path to use as the root directory for the filesystem agent.
--name
The identifier to give to the new Hive agent.
hive agent add filesystem --filesystem-id myfilesystem --root-folder /var/lib/mysql --name fsAgent
Add an AWS Glue hive agent to connect to an AWS-hosted database using the hive agent add glue
command.
SYNOPSYS
hive agent add glue [[--name] string] [[--config-path] string] [[--access-key] string] [[--secret-key] string] [[--session-token] string] [[--glue-endpoint] string] [[--aws-region] string] [[--aws-catalog-credentials-provider-factory-class] string] [[--default-fs] string]
OPTIONS
--name string
name of the agent
[Optional, default = <nothing>]
--config-path string
path to a directory containing glue-site.xml
[Optional, default = <nothing>]
--access-key string
AWS access key. Valid for StaticCredentialsProviderFactory and SessionCredentialsProviderFactory.
[Optional, default = <nothing>]
--secret-key string
AWS secret key. Valid for StaticCredentialsProviderFactory and SessionCredentialsProviderFactory.
[Optional, default = <nothing>]
--session-token string
AWS session token. Valid only for SessionCredentialsProviderFactory
[Optional, default = <nothing>]
--glue-endpoint string
AWS Glue endpoint
[Optional, default = <nothing>]
--aws-region string
AWS region. Defaults to us-east-1
[Optional, default = <nothing>]
--aws-catalog-credentials-provider-factory-class string
AWS catalog credentials provider factory class.
[Optional, default = <nothing>]
--default-fs string
base URI of the filesystem to rewrite LOCATION strings into. I.e. s3:///test_bucket/
[Optional, default = <nothing>]
--name
The identifier to give to the new Hive agent.
--config-path
The path to the directory containing the glue-site.xml.
--access-key
The AWS access key.
--secret-key
The AWS secret key.
--session-token
The AWS session token. Only valid if using SessionCredentialsProviderFactory for the --aws-catalog-credentials-provider-factory-class parameter.
--glue-endpoint
The AWS Glue endpoint for connections to databases.
--aws-region
The AWS region that your database is located in (default is us-east-1).
--aws-catalog-credentials-provider-factory-class
The AWS catalog credentials provider factory class (default is DefaultAWSCredentialsProviderChain).
--default-fs
The base URI of the filesystem used to generate the locations within the database (for example: s3:///test_bucket/).
hive agent add glue --name glueAgent --access-key ACCESS6HCFPAQIVZTKEY --secret-key SECRET1vTMuqKOIuhET0HAI78UIPfSRjcswTKEY --aws-region eu-west-1 --default-fs s3:///test_bucket/
Add a hive agent to connect to a local or remote Apache Hive metastore using the hive agent add hive
command.
Before deploying an agent, see our recommendations section on enabling the Hive metastore event listener for Apache Hive environments.
:::info remote deployments When connecting to a remote Apache Hive metastore, specify a host on the remote cluster that will be used to communicate with the local LiveData Migrator server (constrained to a user-defined port).
A small service will be deployed on this remote host so that the hive agent can migrate data to and/or from the remote Apache Hive metastore. :::
SYNOPSYS
hive agent add hive [[--config-path] string] [[--kerberos-principal] string] [[--kerberos-keytab] string] [[--name] string] [[--host] string] [[--port] integer] [--autodeploy] [[--ssh-user] string] [[--ssh-key] file] [[--ssh-port] int] [--use-sudo] [--ignore-host-checking]
OPTIONS
--config-path string
path to a directory containing hive configuration files
[Optional, default = <nothing>]
--kerberos-principal string
kerberos principal to use
[Optional, default = <nothing>]
--kerberos-keytab string
path to kerberos keytab file
[Optional, default = <nothing>]
--name string
name of the agent
[Optional, default = <nothing>]
--host string
host where remote hive agent will be deployed
[Optional, default = <nothing>]
--port integer
port to use by remote hive agent
[Optional, default = <nothing>]
--autodeploy automatically deploy remote agent. If specified, you must specify sshKey to connect.
[Optional, default = false]
--ssh-user string
ssh user to use for authentication on remote host to perform automatic deployment
[Optional, default = <nothing>]
--ssh-key file
ssh key to use for authentication on remote host to perform automatic deployment
[Optional, default = <nothing>]
--ssh-port int
ssh port to use to perform automatic deployment
[Optional, default = 22]
--use-sudo use sudo for privileged commands while performing remote installation
[Optional, default = false]
--ignore-host-checking ignore strict host key checking for unknown hosts
[Optional, default = false]
--config-path
The path to the directory containing the Hive configuration files (for example: /etc/hive/conf).
--kerberos-principal
The Kerberos principal to use to access the Hive service (for example: hive/[email protected]). Not required if Kerberos is disabled.
--kerberos-keytab
The path to the Kerberos keytab containing the principal to access the Hive service (for example: /etc/security/keytabs/hive.service.keytab). Not required if Kerberos is disabled.
--name
The identifier to give to the new Hive agent.
--host
The host where the remote hive agent will be deployed.
--port
The port for the remote hive agent to use on the remote host. Default is 5052. This port is used to communicate with the local LiveData Migrator server.
--autodeploy
The remote agent will be automatically deployed when this flag is used. If using this, the --ssh-key parameter must also be specified.
--ssh-user
The SSH user to use for authentication on the remote host to perform automatic deployment (when using the --autodeploy parameter).
--ssh-key
The absolute path to the SSH private key to use for authentication on the remote host to perform automatic deployment (when using the --autodeploy parameter).
--ssh-port
The SSH port to use for authentication on the remote host to perform automatic deployment (when using the --autodeploy parameter). Default is port 22.
--use-sudo
All commands performed by the SSH user will use sudo on the remote host when performing automatic deployment (using the --autodeploy parameter).
--ignore-host-checking
Ignore strict host key checking when performing the automatic deployment (using the --autodeploy parameter).
If you do not wish to use the --autodeploy function, follow these steps to deploy a remote hive agent for Apache Hive manually:
1. Transfer the remote server installer to your remote host:
   scp /opt/wandisco/hivemigrator/hivemigrator-remote-server-installer.sh myRemoteHost:~
2. On your remote host, run the installer as the root (or sudo) user in silent mode:
   ./hivemigrator-remote-server-installer.sh -- --silent
3. On your remote host, start the remote server service:
   service hivemigrator-remote-server start
4. On your local host, run the hive agent add hive command without using --autodeploy and its related parameters to configure your remote hive agent. See the manual remote Apache Hive deployment example below for further guidance.
hive agent add hive --name sourceAgent --kerberos-keytab /etc/security/keytabs/hive.service.keytab --kerberos-principal hive/[email protected]
hive agent add hive --name targetautoAgent --autodeploy --ssh-user root --ssh-key /root/.ssh/id_rsa --ssh-port 22 --host myRemoteHost.example.com --port 5052 --kerberos-keytab /etc/security/keytabs/hive.service.keytab --kerberos-principal hive/[email protected] --config-path /etc/hive/conf
hive agent add hive --name targetmanualAgent --host myRemoteHost.example.com --port 5052 --kerberos-keytab /etc/security/keytabs/hive.service.keytab --kerberos-principal hive/[email protected] --config-path /etc/hive/conf
:::note If specifying Kerberos and config path information for remote agents, ensure that the directories and Kerberos principal are correct for your chosen remote host (not your local host). :::
Check the configuration of an existing hive agent using hive agent check
.
SYNOPSYS
hive agent check [--name] string
OPTIONS
--name string
name of the agent
[Mandatory]
hive agent check --name azureAgent
Change the configuration of an existing Azure hive agent using hive agent configure azure
.
The parameters that can be changed are the same as the ones listed in the hive agent add azure
section.
All parameters are optional except --name
, which is required to specify the existing hive agent that you wish to configure.
hive agent configure azure --name azureAgent --database-password CorrectPassword
Change the configuration of an existing filesystem hive agent using hive agent configure filesystem
.
The parameters that can be changed are the same as the ones listed in the hive agent add filesystem
section.
All parameters are optional except --name
, which is required to specify the existing hive agent that you wish to configure.
hive agent configure filesystem --name fsAgent --root-folder /user/dbuser/databases
Change the configuration of an existing AWS Glue hive agent using hive agent configure glue
.
The parameters that can be changed are the same as the ones listed in the hive agent add glue
section.
All parameters are optional except --name
, which is required to specify the existing hive agent that you wish to configure.
hive agent configure glue --name glueAgent --aws-region us-east-2
Change the configuration of an existing Apache hive agent using hive agent configure hive
.
The parameters that can be changed are the same as the ones listed in the hive agent add hive
section.
All parameters are optional except --name
, which is required to specify the existing hive agent that you wish to configure.
hive agent configure hive --name sourceAgent --kerberos-keytab /opt/keytabs/hive.keytab --kerberos-principal hive/[email protected]
Delete the specified hive agent with hive agent delete
.
SYNOPSYS
hive agent delete [--name] string
OPTIONS
--name string
name of the agent
[Mandatory]
hive agent delete --name azureAgent
List configured hive agents with hive agent list
.
SYNOPSYS
hive agent list [--detailed]
OPTIONS
--detailed
[Optional, default = false]
hive agent list --detailed
Show the configuration of a hive agent with hive agent show
.
SYNOPSYS
hive agent show [--name] string
OPTIONS
--name string
name of the agent
[Mandatory]
hive agent show --name azureAgent
Print a list of supported hive agent types with hive agent types
.
SYNOPSYS
hive agent types
hive agent types
Create a hive migration rule that is used to define which databases and tables are migrated.
:::info Specify these rules when starting a new migration to control which databases and tables are migrated. :::
SYNOPSYS
hive rule add [--database-pattern] string [--table-pattern] string [[--name] string]
OPTIONS
--database-pattern string
Pattern to match database name. For example db* would include all databases starting with 'db', like 'dbname1'.
[Mandatory]
--table-pattern string
Pattern to match table name. For example table* would include all tables starting with 'table', like 'tablename1'.
[Mandatory]
--name string
Rule name
[Optional, default = <nothing>]
ALSO KNOWN AS
hive rule create
--database-pattern
Specify a regex pattern that will match the database names you want to migrate.
--table-pattern
Specify a regex pattern that will match the table names you want to migrate.
:::tip
You can use a single asterisk (*) if you want to match all databases and/or all tables within the metastore/database.
:::
--name
The name for the hive rule.
hive rule add --name test_databases --database-pattern test* --table-pattern *
Change the parameters of an existing hive rule.
The parameters that can be changed are the same as the ones listed in the hive rule add (also known as hive rule create) section.
All parameters are optional except --name
, which is required to specify the existing hive rule that you wish to configure.
hive rule configure --name test_databases --database-pattern test_db*
Delete a hive rule.
SYNOPSYS
hive rule delete [--name] string
OPTIONS
--name string
Rule name
[Mandatory]
hive rule delete --name test_databases
List all hive rules.
SYNOPSYS
hive rule list
hive rule list
Show details of a hive rule.
SYNOPSYS
hive rule show [--name] string
OPTIONS
--name string
Rule name
[Mandatory]
hive rule show --name test_databases
Create a new hive migration to initiate metadata migration from your source database or metastore.
:::info Create hive rules before initiating a hive migration to specify which databases and tables are migrated. :::
SYNOPSYS
hive migration add [--source] string [--target] string [[--rule-names] list] [[--name] string] [--auto-start] [--once]
OPTIONS
--source string
name of agent which should be a source of migration
[Mandatory]
--target string
name of agent which should be a target of migration
[Mandatory]
--rule-names list
rule name or a list of rule names to match objects on a source & target for migration. List of rules can be specified separated by the comma (','). You can also use quotes if there are spaces around commas.
[Optional, default = <nothing>]
--name string
migration name
[Optional, default = <nothing>]
--auto-start immediately start created migration
[Optional, default = <nothing>]
--once migration will be stopped after all matching objects are migrated
[Optional, default = <nothing>]
--source
The name of the hive agent for the source of migration.
--target
The name of the hive agent for the target of migration.
--rule-names
The rule name or list of rule names to use with the migration. Multiple rules need to be comma-separated (for example: rule1,rule2,rule3).
--name
The name to identify the migration with.
--auto-start
Specify this parameter to start the migration immediately after creation.
--once
Specify this parameter to perform a one-time migration, and not continuously scan for new or changing metadata.
hive migration add --source sourceAgent --target remoteAgent --rule-names test_dbs,user_dbs --name hive_migration --auto-start
Delete a hive migration.
:::note
A hive migration must be in a stopped state before it can be deleted. You can achieve this by using the --force-stop parameter with this command.
:::
SYNOPSYS
hive migration delete [--name] string [--force-stop]
OPTIONS
--name string
migration name
[Mandatory]
--force-stop also stop migration if it's in progress. By default, migration should be stopped before delete.
[Optional, default = false]
hive migration delete --name hive_migration --force-stop
List all hive migrations.
SYNOPSYS
hive migration list
hive migration list
Pause a hive migration or a list of hive migrations (comma-separated).
SYNOPSYS
hive migration pause [--names] list
OPTIONS
--names list
migration name. List of migrations can be specified separated by comma (','). You can also use quotes if there are spaces around commas.
[Mandatory]
hive migration pause --names hive_migration1,hive_migration2
Pause all hive migrations.
SYNOPSYS
hive migration pause --all
hive migration pause --all
Resume a paused hive migration or a list of paused hive migrations (comma-separated).
SYNOPSYS
hive migration resume [--names] list
OPTIONS
--names list
migration name. List of migrations can be specified separated by comma (','). You can also use quotes if there are spaces around commas.
[Mandatory]
hive migration resume --names hive_migration1,hive_migration2
Resume all hive migrations.
SYNOPSYS
hive migration resume --all
hive migration resume --all
Display information about a hive migration.
SYNOPSYS
hive migration show [--name] string
OPTIONS
--name string
migration name
[Mandatory]
hive migration show --name hive_migration
Start a hive migration or a list of hive migrations (comma-separated).
:::note
Specify the --once
parameter to perform a one-time migration, and not continuously scan for new or changing metadata.
:::
SYNOPSYS
hive migration start [--names] list [--once]
OPTIONS
--names list
migration name. List of migrations can be specified separated by comma (','). You can also use quotes if there are spaces around commas.
[Mandatory]
--once
[Optional, default = false]
hive migration start --names hive_migration1,hive_migration2
Start all hive migrations.
:::note
Specify the --once
parameter to perform a one-time migration, and not continuously scan for new or changing metadata.
:::
SYNOPSYS
hive migration start --all [--once]
OPTIONS
--once
[Optional, default = false]
hive migration start --all --once
Show the status of a hive migration or a list of hive migrations (comma-separated).
SYNOPSYS
hive migration status [--names] list
OPTIONS
--names list
migration name. List of migrations can be specified separated by comma (','). You can also use quotes if there are spaces around commas.
[Mandatory]
hive migration status --names hive_migration1,hive_migration2
Show the status of all hive migrations.
SYNOPSYS
hive migration status --all
hive migration status --all
Stop a running hive migration or a list of running hive migrations (comma-separated).
SYNOPSYS
hive migration stop [--names] list
OPTIONS
--names list
migration name. List of migrations can be specified separated by comma (','). You can also use quotes if there are spaces around commas.
[Mandatory]
hive migration stop --names hive_migration1,hive_migration2
Stop all running hive migrations.
SYNOPSYS
hive migration stop --all
hive migration stop --all
Show the details of the active license.
SYNOPSYS
license show [--full]
OPTIONS
--full show license details
[Optional, default = false]
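For example, to display the full details of the active license:
license show --full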
Upload a new license by submitting its location on the local filesystem.
SYNOPSYS
license upload [--path] string
OPTIONS
--path string
path to the license file in the local filesystem
[Mandatory]
license upload --path /user/hdfs/license.key
Connect to the LiveData Migrator service on your LiveData Migrator host with this command.
:::note
This is a manual method of connecting to the LiveData Migrator service as the livedata-migrator --host=localhost
command (shown in the CLI - Log in section) will attempt to establish this connection automatically.
:::
SYNOPSYS
connect livemigrator [--host] string
[--ssl]
[[--port] int]
[[--timeout] integer]
OPTIONS
--host string
host to connect to
[Mandatory]
--ssl use ssl connection
[Optional, default = false]
--port int
port of the service, either http or https, depending on the ssl flag
[Optional, default = 18080]
--timeout integer
override default(5m) connection timeout in milliseconds
[Optional, default = <nothing>]
--host
The hostname or IP address for the LiveData Migrator host.
--ssl
Specify this parameter if you want to establish an SSL connection to LiveData Migrator. Enable Server SSL on the LiveData Migrator service before using this parameter.
--port
The LiveData Migrator port to connect on (default is 18080).
--timeout
Define the connection timeout in milliseconds. Set this parameter to override the default connection timeout of 5 minutes (300000ms).
connect livemigrator --host localhost --port 18080
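A sketch of connecting over SSL with a longer timeout (the hostname is a placeholder, and Server SSL must already be enabled on the LiveData Migrator service):
connect livemigrator --host ldm.example.com --ssl --timeout 600000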
Connect to the HiveMigrator service on your LiveData Migrator host with this command.
:::note
This is a manual method of connecting to the HiveMigrator service as the livedata-migrator --host=localhost
command (shown in the CLI - Log in section) will attempt to establish this connection automatically.
:::
SYNOPSYS
connect hivemigrator [--host] string
[--ssl]
[[--port] int]
[[--timeout] long]
OPTIONS
--host string
host to connect to
[Mandatory]
--ssl use ssl connection
[Optional, default = false]
--port int
port of the service, either http or https, depending on the ssl flag
[Optional, default = 6780]
--timeout long
override default(5m) connection timeout in milliseconds
[Optional, default = <nothing>]
--host
The hostname or IP address for the LiveData Migrator host that contains the HiveMigrator service.
--ssl
Specify this parameter if you want to establish an SSL connection to HiveMigrator.
--port
The HiveMigrator service port to connect on (default is 6780).
--timeout
Define the connection timeout in milliseconds. Set this parameter to override the default connection timeout of 5 minutes (300000ms).
connect hivemigrator --host localhost --port 6780
Clear the interactive action prompt screen output with the clear
command. You can also type <Ctrl-L>
to achieve the same, even while typing another command.
SYNOPSYS
clear
Prints whatever text you write to the console. This can be used to sanity check a command before running it (for example: echo migration add --path /repl1 --target mytarget --migration-id myNewMigration --exclusions 100mbfiles
).
SYNOPSYS
echo [--message] string
OPTIONS
--message string
[Mandatory]
Entering either exit
or quit
will stop operation of LiveData Migrator when it is run from the command line. All processing will cease, and you will be returned to your system shell.
If your LiveData Migrator command line is connected to a LiveData Migrator system service, this command will end your interactive session with that service, which will remain in operation to continue processing Live migrations.
If this command is encountered during non-interactive processing of input (such as when you pipe input to an instance as part of another shell script) no further commands contained in that input will be processed.
SYNOPSYS
exit
ALSO KNOWN AS
quit
Use the help
command to get details of all commands available from the action prompt.
SYNOPSYS
help [[-C] string]
OPTIONS
-C or --command string
The command to obtain help for.
[Optional, default = <none>]
help migration\ list
NAME
migration list - List running and active migrations.
SYNOPSYS
migration list
Enter history
at the action prompt to list all previously entered commands.
Entering history --file <filename>
will save up to the 500 most recently entered commands in text form to the specified file. Use this to record commands that you have executed.
SYNOPSYS
history [[--file] file]
OPTIONS
--file file
A file to save history to.
[Optional, default = <none>]
--file
The name of the file in which to save the history of commands.
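For example, to save your command history to a file (the path is a placeholder):
history --file /tmp/ldm-command-history.txt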
Load and execute commands from a text file using the script --file <filename>
command. This file should have one command per line, and each will be executed as though it were entered directly at the action prompt, in that sequence.
SYNOPSYS
script [--file] file
OPTIONS
--file file
[Mandatory]
--file
The name of the file containing script commands.
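As a sketch (the file path and resource identifiers below are placeholders), a script file might contain the following lines, one command per line:
exclusion add regex --exclusion-id notmp --description "Exclude paths that start with tmp" --regex ^tmp\.*
migration add --path /repl1 --target mytarget --exclusions notmp --auto-start
You would then execute it with:
script --file /home/user/setup-migrations.cli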
Use the stacktrace
command to get full technical information about the source of an error during LiveData Migrator operation.
SYNOPSYS
stacktrace
The action prompt provides many features to guide you during operation.
Feature | How to use it |
---|---|
Review available commands | Commands that cannot be used without creating other resources first are tagged with * in the output of the help command. |
Command completion | Hit the <tab> key at any time to get assistance or to complete partially-entered commands. |
Cancel input | Type <Ctrl-C> before entering a command to return to an empty action prompt. |
Syntax indication | Invalid commands are highlighted as you type. |
Clear the display | Type <Ctrl-L> at any time. |
Previous commands | Navigate previous commands using the up and down arrows, and use standard emacs shortcuts. |
Interactive or scripted operation | You can interact with the command line interface directly, or send it commands on standard input to incorporate it into shell scripts. |
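For scripted operation, a minimal sketch (assuming the livedata-migrator launcher shown in the CLI - Log in section) is to pipe commands on standard input:
echo "migration list" | livedata-migrator --host=localhost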
The service scripts can be used to control operation of each individual service at any time.
service livedata-migrator start|stop|force-reload|restart|status
service hivemigrator start|stop|force-reload|restart|status
service livedata-ui start|stop|force-reload|restart|status
The following commands will only affect logging of the CLI terminal, and will not affect other components of LiveData Migrator:
log off
log info
log debug
log trace