Integrate paddle backend in devel branch #4157
base: devel
Conversation
double-backward
📝 Walkthrough
The pull request introduces significant updates across multiple files, primarily enhancing the DeepMD framework's compatibility with the Paddle backend. Key changes include the addition of new classes and methods for atomic modeling, descriptors, and fitting networks, as well as modifications to existing functionalities.
Changes
Possibly related PRs
Suggested reviewers
CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
backend/read_env.py (1)
156-156: Consider using more descriptive names for used variables.
While the tuple unpacking is correct, consider using descriptive names for the variables you actually use instead of discarding all but two with underscores. This improves code readability.
- cmake_minimum_required_version, cmake_args, _, _, _, _, _ = get_argument_from_env()
+ cmake_minimum_required_version, cmake_args, *_ = get_argument_from_env()
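A quick illustration of the extended-unpacking form suggested above (the tuple values below are placeholders, not the actual return of get_argument_from_env()):

# `*_` absorbs every remaining element, so the call site no longer breaks
# when new items are appended to the returned tuple.
cmake_minimum_required_version, cmake_args, *_ = (
    "3.16",
    ["-DENABLE_PADDLE=ON"],
    "extra",
    "values",
)
print(cmake_minimum_required_version)  # -> 3.16
print(cmake_args)                      # -> ['-DENABLE_PADDLE=ON']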
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (4)
- backend/read_env.py (5 hunks)
- deepmd/dpmodel/atomic_model/base_atomic_model.py (1 hunks)
- deepmd/dpmodel/descriptor/se_t_tebd.py (1 hunks)
- deepmd/pd/model/network/mlp.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- deepmd/dpmodel/descriptor/se_t_tebd.py
🧰 Additional context used
🪛 Ruff
deepmd/pd/model/network/mlp.py
281-281: Local variable `prec` is assigned to but never used. Remove assignment to unused variable `prec` (F841)
🔇 Additional comments (11)
backend/read_env.py (4)
13-16: LGTM! Import statements are well-organized.
The new imports for Paddle backend follow the consistent pattern established for other backends.
28-28: LGTM! Function signature and docstring updates are accurate.
The return type annotation and docstring are correctly updated to include the Paddle version.
Also applies to: 47-48
150-150: LGTM! Return value update is consistent.
The pd_version is correctly added to the return tuple.
126-137: LGTM! Paddle configuration follows consistent patterns.
The implementation correctly handles Paddle backend configuration similar to other backends.
Let's verify the environment variable handling:
✅ Verification successful
The verification results show consistent handling of environment variables across all backend configurations.
Environment variable handling for Paddle backend is consistent and correct
The implementation follows the established pattern used across all backend configurations:
- Default value of "0" (disabled by default) matches the pattern used by the other backends such as PyTorch
- Binary "0"/"1" toggle behavior is consistent
- CMake args are properly set based on the environment variable state
- Version handling follows the same pattern as other backends
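For reference, a minimal sketch of the 0/1 toggle pattern just described; the DP_ENABLE_PADDLE variable name matches the DP_ENABLE_* family searched for below, but the CMake flag and helper structure are illustrative, not the exact read_env.py code:

import os

def paddle_args(cmake_args: list):
    # Disabled by default ("0"), consistent with the other DP_ENABLE_* variables.
    if os.environ.get("DP_ENABLE_PADDLE", "0") == "1":
        import paddle  # imported lazily, only when the backend is enabled

        cmake_args.append("-DENABLE_PADDLE=ON")  # flag name is an assumption
        return paddle.__version__
    cmake_args.append("-DENABLE_PADDLE=OFF")  # flag name is an assumption
    return None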
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all backend environment variables are handled consistently
# Test: Search for environment variable handling patterns
rg -A 2 'os\.environ\.get\("DP_ENABLE_'
Length of output: 1687
deepmd/dpmodel/atomic_model/base_atomic_model.py (1)
213-213: LGTM: Type casting fix ensures compatibility across backends.
The explicit type casting of `atom_mask` to match `ret_dict[kk].dtype` ensures proper type compatibility during multiplication, which is particularly important for the Paddle backend, which has stricter type checking. This change aligns with TensorFlow's behavior regarding type compatibility in tensor operations.
Let's verify the fix works consistently across backends:
✅ Verification successful
Based on the search results, a definitive response about the type casting change:
Type casting fix is correctly implemented and consistent with Paddle backend requirements
The explicit type casting of `atom_mask` to match `ret_dict[kk].dtype` is consistent with type handling patterns across the codebase, particularly in the Paddle backend. The evidence shows:
- Similar type casting patterns are used throughout paddle backend files for mask operations:
weight = weight * mask.unsqueeze(-1).astype(weight.dtype)
length = length + (~mask.unsqueeze(-1)).astype(length.dtype)
- The paddle backend consistently ensures type compatibility in tensor operations by explicitly casting masks to match the target tensor's dtype:
env_mat *= exclude_mask.unsqueeze(-1).astype(env_mat.dtype)
outs = outs * mask[:, :, None].astype(outs.dtype)
The change aligns with paddle's strict type checking requirements while maintaining compatibility with other backends.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for similar type casting patterns across the codebase
# to ensure consistent handling across different backends
# Look for similar mask operations in other backend implementations
rg -A 2 "mask.*dtype" --type python
# Check for any remaining type-related issues in tensor operations
rg -A 2 "TypeError.*cannot.*multiply" --type python
Length of output: 153
Script:
#!/bin/bash
# Let's try without the file type flag and focus on mask operations
rg -A 2 "mask.*dtype"
# Search for type-related patterns in tensor operations
rg -A 2 "astype.*dtype"
# Look for any error handling related to type mismatches
rg -A 2 "TypeError.*multiply"
# Check for similar patterns in paddle-specific code
rg -A 2 "paddle.*dtype"
Length of output: 104142
deepmd/pd/model/network/mlp.py (6)
1-50: LGTM! Well-organized imports and utility function.
The imports are logically grouped and the `empty_t` utility function is appropriately implemented.
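Given the name, the helper is presumably a thin wrapper around uninitialized allocation; a rough sketch (the actual signature in mlp.py is not shown in this review and may differ):

import paddle

def empty_t(shape, precision):
    # Allocate an uninitialized tensor with the requested dtype; the signature
    # is guessed from the helper's name, not copied from the PR.
    return paddle.empty(shape, dtype=precision)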
52-72: LGTM! Clean Identity layer implementation.
The Identity layer is well-implemented with proper serialization support and version tracking.
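For context, a minimal sketch of such a layer; the serialization fields here are illustrative, not the exact PR code:

import paddle

class Identity(paddle.nn.Layer):
    """Pass-through layer, useful as a placeholder in network collections."""

    def forward(self, x: paddle.Tensor) -> paddle.Tensor:
        return x

    def serialize(self) -> dict:
        # A version field lets deserialize() detect incompatible formats.
        return {"@class": "Identity", "@version": 1}

    @classmethod
    def deserialize(cls, data: dict) -> "Identity":
        return cls()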
154-154: Fix type comparison operator.
Using `is` for type comparison may lead to unexpected behavior. Use `==` instead.
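A small demonstration of why identity checks can mislead here (NumPy is used for brevity; dtype comparisons are a common case where `is` and `==` diverge):

import numpy as np

a = np.zeros(3, dtype="float64")
print(a.dtype == np.float64)  # True  -- dtype __eq__ compares semantically
print(a.dtype is np.float64)  # False -- the dtype object is not the scalar type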
281-281: Remove unused variable.
The variable `prec` is assigned but never used.
🧰 Tools
🪛 Ruff
281-281: Local variable `prec` is assigned to but never used. Remove assignment to unused variable `prec` (F841)
233-234: Remove commented-out else clause.
The commented-out `else: yy = yy` is unnecessary and can be removed.
299-328: LGTM! Clean network collection implementation.
The MLP and NetworkCollection classes properly adapt the base implementations for PaddlePaddle, with clear network type mappings.
with paddle.no_grad():
    self.matrix.fill_(0.0)
if use_bias and self.bias is not None:
    with paddle.no_grad():
        self.bias.fill_(1.0)
🛠️ Refactor suggestion
Simplify nested no_grad context.
The nested with paddle.no_grad()
statements are redundant.
def _zero_init(self, use_bias=True):
    with paddle.no_grad():
        self.matrix.fill_(0.0)
    if use_bias and self.bias is not None:
-       with paddle.no_grad():
-           self.bias.fill_(1.0)
+       self.bias.fill_(1.0)
Hello @njzjz, I have fixed several bugs in the Paddle framework (as newly updated in this PR description), and now most of the unit tests can pass (a few tests need to wait for the PR to be merged in the next couple of days). Currently, I found that PyTorch seems not to be found properly in CMake (https://github.com/deepmodeling/deepmd-kit/actions/runs/11570827128/job/32207559165?pr=4157#step:6:575), but my PR did not modify any PyTorch-related code in CMake. Could you please help look into the reason?
…curate report && remove CMAKE_PREFIX setting in read_env for paddle
@HydrogenSulfate could we schedule a meeting to discuss some details? (FYI @wanghan-iapcm @iProzd)
Thanks for the reply. Please let me know when you are available, and we can connect via remote meeting software.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
README.md (1)
75-76: Consider enhancing the v3 highlights section.
While the addition of the Paddle backend is correctly noted, consider providing more details about its benefits and capabilities, similar to how other major features are documented in previous versions.
For example:
#### v3
- Multiple backends supported. Add PyTorch, JAX and Paddle backends.
+ Multiple backends supported. Add PyTorch, JAX, and Paddle backends:
+ - Enhanced flexibility in choosing deep learning frameworks
+ - Support for Paddle's DCU devices
+ - Improved compatibility with existing DeePMD-kit features
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
- README.md (2 hunks)
🧰 Additional context used
🪛 LanguageTool
README.md
[uncategorized] ~22-~22: Possible missing comma found.
Context: ... including TensorFlow, PyTorch, JAX and Paddle the most popular deep learning framewor...
(AI_HYDRA_LEO_MISSING_COMMA)
README.md
Outdated
@@ -19,7 +19,7 @@ For more information, check the [documentation](https://deepmd.readthedocs.io/).

### Highlighted features

-- **interfaced with multiple backends**, including TensorFlow, PyTorch, and JAX, the most popular deep learning frameworks, making the training process highly automatic and efficient.
+- **interfaced with multiple backends**, including TensorFlow, PyTorch, JAX and Paddle the most popular deep learning frameworks, making the training process highly automatic and efficient.
Fix grammar in the backends list.
The list of supported backends needs better punctuation for clarity.
Apply this change:
- including TensorFlow, PyTorch, JAX and Paddle the most popular deep learning frameworks
+ including TensorFlow, PyTorch, JAX, and Paddle, the most popular deep learning frameworks
We are available on Friday morning (Beijing time).
I've created a new Tencent Meeting. Please confirm if this time works for you @njzjz. HydrodenSulfate invites you to join a Tencent Meeting. Join via the link, or add it to your meeting list: #Tencent Meeting: 951-529-899. Copy this information and open the Tencent Meeting mobile app to join.
Force-pushed from 6280a5a to d03702a
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (2)
deepmd/pd/model/network/init.py (1)
56-57: Improve variable naming for better readability
The variable names `l` and `u` are ambiguous. Consider using more descriptive names like `lower_bound` and `upper_bound`.
- l = norm_cdf((a - mean) / std)
- u = norm_cdf((b - mean) / std)
+ lower_bound = norm_cdf((a - mean) / std)
+ upper_bound = norm_cdf((b - mean) / std)
🧰 Tools
🪛 Ruff
56-56: Ambiguous variable name: `l` (E741)
deepmd/pd/infer/deep_eval.py (1)
451-452: Avoid informal language in comments
The comments `# this is kinda hacky` at lines 452 and 538 could be rephrased to provide clearer context and maintain a professional tone. Consider explaining why the approach is a workaround or how it could be improved in the future.
Also applies to: 538-538
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
- deepmd/pd/infer/deep_eval.py (1 hunks)
- deepmd/pd/model/network/init.py (1 hunks)
🧰 Additional context used
🪛 Ruff
deepmd/pd/infer/deep_eval.py
367-370: Use ternary operator `natoms = len(atom_types[0]) if mixed_type else len(atom_types)` instead of if-else-block (SIM108)
deepmd/pd/model/network/init.py
56-56: Ambiguous variable name: `l` (E741)
330-330: No explicit `stacklevel` keyword argument found (B028)
380-380: No explicit `stacklevel` keyword argument found (B028)
🪛 GitHub Check: CodeQL
deepmd/pd/infer/deep_eval.py
[notice] 66-66: Unused import
Import of 'ase' is not used.
🔇 Additional comments (4)
deepmd/pd/model/network/init.py (1)
1-458: Implementation looks good overall!
The tensor initialization functions are well-implemented and properly documented. The code successfully adapts PyTorch's initialization module for Paddle while maintaining API compatibility. The implementation includes proper error handling and comprehensive docstrings with examples.
🧰 Tools
🪛 Ruff
56-56: Ambiguous variable name: `l` (E741)
330-330: No explicit `stacklevel` keyword argument found (B028)
380-380: No explicit `stacklevel` keyword argument found (B028)
deepmd/pd/infer/deep_eval.py (3)
115-120: Replace `assert` statements with explicit exception handling
Using `assert` statements for input validation is not recommended in production code because assertions can be disabled in optimized mode (e.g., when using the `-O` flag). Replace the `assert` statements with explicit `if` conditions and raise appropriate exceptions, such as `ValueError`, to ensure that the checks are always enforced.
Apply this diff to implement the change:
- assert (
-     head is not None
- ), f"Head must be set for multitask model! Available heads are: {model_keys}"
+ if head is None:
+     raise ValueError(f"Head must be set for multitask model! Available heads are: {model_keys}")
- assert (
-     head in model_keys
- ), f"No head named {head} in model! Available heads are: {model_keys}"
+ if head not in model_keys:
+     raise ValueError(f"No head named {head} in model! Available heads are: {model_keys}")
367-370: Simplify if-else block using a ternary operator
You can simplify the code by replacing the if-else block with a ternary operator for better readability.
Apply this diff to make the change:
- if mixed_type:
-     natoms = len(atom_types[0])
- else:
-     natoms = len(atom_types)
+ natoms = len(atom_types[0]) if mixed_type else len(atom_types)
🧰 Tools
🪛 Ruff
367-370: Use ternary operator `natoms = len(atom_types[0]) if mixed_type else len(atom_types)` instead of if-else-block (SIM108)
371-372: Replace `assert` with explicit exception handling
Using `assert` statements for runtime checks may lead to unintended behavior if the code is executed in optimized mode. Replace the `assert` with an `if` statement and raise an appropriate exception to ensure the check is always performed.
Apply this diff to make the change:
- if natoms == 0:
-     assert coords.size == 0
+ if natoms == 0 and coords.size != 0:
+     raise ValueError("Coordinates must be empty when there are no atoms.")
warnings.warn("Initializing zero-element tensors is a no-op")
return tensor
Add stacklevel parameter to warnings
When using warnings.warn, include the stacklevel parameter to help users locate the source of the warning in their code.
- warnings.warn("Initializing zero-element tensors is a no-op")
+ warnings.warn("Initializing zero-element tensors is a no-op", stacklevel=2)
Also applies to: 380-381
🧰 Tools
🪛 Ruff
330-330: No explicit `stacklevel` keyword argument found (B028)
elif (
    not isinstance(param, bool)
    and isinstance(param, int)
    or isinstance(param, float)
):
Fix conditional logic for type checking
The current condition has unclear operator precedence which could lead to unexpected behavior. Parentheses should be added to make the logic more explicit.
- elif (
- not isinstance(param, bool)
- and isinstance(param, int)
- or isinstance(param, float)
- ):
+ elif (
+ not isinstance(param, bool)
+ and (isinstance(param, int) or isinstance(param, float))
+ ):
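A quick, generic illustration of the precedence at play (`and` binds tighter than `or`, so `A and B or C` parses as `(A and B) or C`):

A, B, C = False, True, True
print(A and B or C)    # True  -- the C branch fires even though A is False
print(A and (B or C))  # False -- explicit parentheses change the result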
deepmd/pd/utils/aux.py
Outdated
A bit funny, Windows doesn't allow a filename of `aux`: https://stackoverflow.com/a/67226358/9567349
A bit funny, Windows doesn't allow a filename of `aux`: https://stackoverflow.com/a/67226358/9567349
Thanks! I had previously suspected this reason, but didn't get around to verifying it. The file name is indeed not very appropriate, and I will change it to `decomp.py` and add detailed explanations in the file later.
Actionable comments posted: 26
🧹 Outside diff range and nitpick comments (18)
deepmd/pd/model/descriptor/descriptor.py (2)
37-41: Enhance class documentation with more details.
While the current documentation provides a basic overview, consider adding:
- Examples of typical usage
- Description of the plugin architecture
- List of available implementations
45-54: Improve error message specificity.
The error message could be more helpful by listing available descriptor types. Consider updating the error message:
- "the type of DescriptorBlock should be set by `type`"
+ f"the type of DescriptorBlock should be set by `type`. Available types: {cls.get_available_types()}"
deepmd/pd/model/atomic_model/pairtab_atomic_model.py (1)
243-253: Add input validation for optional parameters.
The method accepts several optional parameters but doesn't validate their shapes when provided. Consider adding validation to ensure `mapping`, `fparam`, and `aparam` have correct shapes when provided.
def forward_atomic(
    self,
    extended_coord: paddle.Tensor,
    extended_atype: paddle.Tensor,
    nlist: paddle.Tensor,
    mapping: Optional[paddle.Tensor] = None,
    fparam: Optional[paddle.Tensor] = None,
    aparam: Optional[paddle.Tensor] = None,
    do_atomic_virial: bool = False,
    comm_dict: Optional[dict[str, paddle.Tensor]] = None,
) -> dict[str, paddle.Tensor]:
+   # Validate shapes of optional parameters if provided
+   if mapping is not None:
+       if mapping.shape[0] != extended_coord.shape[0]:
+           raise ValueError(f"Expected mapping to have {extended_coord.shape[0]} frames, got {mapping.shape[0]}")
+   if fparam is not None:
+       if fparam.shape[0] != extended_coord.shape[0]:
+           raise ValueError(f"Expected fparam to have {extended_coord.shape[0]} frames, got {fparam.shape[0]}")
+   if aparam is not None:
+       if aparam.shape[0] != extended_coord.shape[0]:
+           raise ValueError(f"Expected aparam to have {extended_coord.shape[0]} frames, got {aparam.shape[0]}")
deepmd/pd/model/descriptor/repformers.py (4)
203-203: Add more context to assertion message.
The assertion lacks an error message explaining why `len(sel)` must be 1.
- assert len(sel) == 1
+ assert len(sel) == 1, "Only single selection group is supported"
391-393: Improve assertion messages for better debugging.
The assertions lack error messages that would help users understand why the conditions must be true.
- assert mapping is not None
- assert extended_atype_embd is not None
+ assert mapping is not None, "mapping is required when comm_dict is None"
+ assert extended_atype_embd is not None, "extended_atype_embd is required when comm_dict is None"
450-450: Remove unused loop variable.
The loop variable `idx` is not used within the loop body.
- for idx, ll in enumerate(self.layers):
+ for _, ll in enumerate(self.layers):
🧰 Tools
🪛 Ruff
450-450: Loop control variable `idx` not used within loop body. Rename unused `idx` to `_idx` (B007)
528-532: Simplify conditional assignment using ternary operator.
The assignment to `sampled` can be simplified using a ternary operator.
- if callable(merged):
-     # only get data for once
-     sampled = merged()
- else:
-     sampled = merged
+ sampled = merged() if callable(merged) else merged
🧰 Tools
🪛 Ruff
528-532: Use ternary operator `sampled = merged() if callable(merged) else merged` instead of if-else-block (SIM108)
deepmd/pd/model/descriptor/dpa2.py (2)
509-509: Rename unused loop variable.
The loop variable `ii` is not used within the loop body. Rename it to `_` to indicate it's intentionally unused.
- for ii, descrpt in enumerate(descrpt_list):
+ for _, descrpt in enumerate(descrpt_list):
🧰 Tools
🪛 Ruff
509-509: Loop control variable `ii` not used within loop body. Rename unused `ii` to `_ii` (B007)
694-694: Remove unused variable assignment.
The `env_mat` variable is assigned but never used.
- env_mat = repformers_variable.pop("env_mat")
+ _ = repformers_variable.pop("env_mat")  # Discard unused value
🧰 Tools
🪛 Ruff
694-694: Local variable `env_mat` is assigned to but never used. Remove assignment to unused variable `env_mat` (F841)
deepmd/pd/model/descriptor/repformer_layer.py (1)
580-615: Enhance input validation and documentation for update mechanisms.
The class initialization has many parameters but could benefit from better validation and documentation:
- The update_style parameter needs documentation for its options
- Some parameter combinations might be invalid
Consider adding parameter validation and improving documentation:
def __init__(
    self,
    rcut,
    rcut_smth,
    sel: int,
    ntypes: int,
    g1_dim=128,
    g2_dim=16,
    axis_neuron: int = 4,
    update_chnnl_2: bool = True,
    update_g1_has_conv: bool = True,
    update_g1_has_drrd: bool = True,
    update_g1_has_grrg: bool = True,
    update_g1_has_attn: bool = True,
    update_g2_has_g1g1: bool = True,
    update_g2_has_attn: bool = True,
    update_h2: bool = False,
    attn1_hidden: int = 64,
    attn1_nhead: int = 4,
    attn2_hidden: int = 16,
    attn2_nhead: int = 4,
    attn2_has_gate: bool = False,
    activation_function: str = "tanh",
    # One of: "res_avg" (residual averaging), "res_incr" (incremental residuals),
    # or "res_residual" (learned residual weights)
    update_style: str = "res_avg",
    update_residual: float = 0.001,
    update_residual_init: str = "norm",
    smooth: bool = True,
    precision: str = "float64",
    trainable_ln: bool = True,
    ln_eps: Optional[float] = 1e-5,
    use_sqrt_nnei: bool = True,
    g1_out_conv: bool = True,
    g1_out_mlp: bool = True,
    seed: Optional[Union[int, list[int]]] = None,
):
    """Initialize the RepformerLayer.

    Raises
    ------
    ValueError
        If invalid parameter combinations are provided:
        - update_g2_has_attn=True but update_chnnl_2=False
        - update_h2=True but update_chnnl_2=False
    """
deepmd/pd/model/descriptor/env_mat.py (3)
1-1: Add a copyright notice.
Consider adding a copyright notice with the current year and the name of the copyright holder to assert ownership and protect intellectual property rights.
+# Copyright 2024 DeepMD Developers
# SPDX-License-Identifier: LGPL-3.0-or-later
5-10: Consolidate imports.
To improve readability and maintainability, consider consolidating the imports from the same module into a single import statement.
-from deepmd.pd.utils import (
-    decomp,
-)
-from deepmd.pd.utils.preprocess import (
-    compute_smooth_weight,
-)
+from deepmd.pd.utils import decomp
+from deepmd.pd.utils.preprocess import compute_smooth_weight
49-87: Provide more detailed documentation.
The `prod_env_mat` function is well-structured and follows a clear logic flow. However, the documentation could be improved to provide more clarity and context for users.
Consider enhancing the documentation by:
1. Explaining the purpose and significance of the smooth environment matrix in the context of the DeepMD framework.
2. Providing more details on the expected shape and meaning of each input parameter. For example:
- Clarify the relationship between `nframes`, `nall`, and `nloc`.
- Explain the significance of the `sec` dimension in `mean` and `stddev`.
- Describe the role of the `radial_only` parameter and its impact on the output.
3. Elaborating on the returned values, including:
- The shape and meaning of `env_mat_se_a`.
- The purpose and shape of `diff` and `switch`.
4. Adding references to relevant papers, algorithms, or theoretical concepts that underpin the calculation of the smooth environment matrix.
By providing more comprehensive documentation, users will have a better understanding of how to use the `prod_env_mat` function effectively and interpret its results correctly.
278-321
: Consider uncommenting the type checking code for input consistency.The
input_type_cast
method casts the input data to the global float type. However, lines 296-302 contain commented-out code that checks for type consistency between input parameters and logs a warning if there is a mismatch. Uncommenting this code could help identify potential issues with input data types.Apply this change:
-# for vv, kk in zip([fparam, aparam], ["frame", "atomic"]): -# if vv is not None and self.reverse_precision_dict[vv.dtype] != input_prec: -# log.warning( -# f"type of {kk} parameter {self.reverse_precision_dict[vv.dtype]}" -# " does not match" -# f" that of the coordinate {input_prec}" -# ) +for vv, kk in zip([fparam, aparam], ["frame", "atomic"]): + if vv is not None and self.reverse_precision_dict[vv.dtype] != input_prec: + log.warning( + f"type of {kk} parameter {self.reverse_precision_dict[vv.dtype]}" + f" does not match that of the coordinate {input_prec}" + )
335-336
: Usekey in dict
instead ofkey in dict.keys()
.In lines 335-336, the code checks for the existence of a key in the dictionary using
dict.keys()
. It is more efficient and concise to usekey in dict
instead.Apply this change:
- if kk not in model_ret.keys(): + if kk not in model_ret:🧰 Tools
🪛 Ruff
335-335: Use
key in dict
instead ofkey in dict.keys()
Remove
.keys()
(SIM118)
336-336: Use
key not in dict
instead ofkey not in dict.keys()
Remove
.keys()
(SIM118)
deepmd/pd/model/descriptor/se_atten.py (3)
84-85: Avoid using `type` as a parameter name to prevent shadowing the built-in function
The parameter `type` in the `__init__` method shadows Python's built-in `type` function, which can lead to confusion or unexpected behavior. Consider renaming this parameter to something more descriptive, such as `descriptor_type`.
Apply this diff to rename the parameter:
- type: Optional[str] = None,
+ descriptor_type: Optional[str] = None,
And update any references to `type` within the method accordingly.
152-152: Unnecessary deletion of `type` variable
The statement `del type` is used to delete the `type` variable to avoid conflicts with the built-in `type` function. If you rename the parameter as suggested, this deletion becomes unnecessary and can be removed.
Apply this diff to remove the unnecessary deletion:
- del type
357-361: Simplify conditional assignment with ternary operator
The conditional assignment can be simplified using a ternary operator for better readability and conciseness.
Apply this diff to simplify the code:
- if callable(merged):
-     # only get data for once
-     sampled = merged()
- else:
-     sampled = merged
+ sampled = merged() if callable(merged) else merged
🧰 Tools
🪛 Ruff
357-361: Use ternary operator `sampled = merged() if callable(merged) else merged` instead of if-else-block (SIM108)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (13)
- deepmd/pd/loss/ener.py (1 hunks)
- deepmd/pd/model/atomic_model/pairtab_atomic_model.py (1 hunks)
- deepmd/pd/model/descriptor/descriptor.py (1 hunks)
- deepmd/pd/model/descriptor/dpa2.py (1 hunks)
- deepmd/pd/model/descriptor/env_mat.py (1 hunks)
- deepmd/pd/model/descriptor/gaussian_lcc.py (1 hunks)
- deepmd/pd/model/descriptor/repformer_layer.py (1 hunks)
- deepmd/pd/model/descriptor/repformers.py (1 hunks)
- deepmd/pd/model/descriptor/se_atten.py (1 hunks)
- deepmd/pd/model/descriptor/se_t_tebd.py (1 hunks)
- deepmd/pd/model/model/make_model.py (1 hunks)
- deepmd/pd/model/model/spin_model.py (1 hunks)
- deepmd/pd/model/model/transform_output.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- deepmd/pd/loss/ener.py
- deepmd/pd/model/descriptor/gaussian_lcc.py
🧰 Additional context used
🪛 Ruff
deepmd/pd/model/descriptor/dpa2.py
88-88: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
509-509: Loop control variable `ii` not used within loop body. Rename unused `ii` to `_ii` (B007)
694-694: Local variable `env_mat` is assigned to but never used. Remove assignment to unused variable `env_mat` (F841)
deepmd/pd/model/descriptor/repformer_layer.py
967-967: Local variable `ng2` is assigned to but never used. Remove assignment to unused variable `ng2` (F841)
1156-1159: Use ternary operator `gg1 = _make_nei_g1(g1_ext, nlist) if cal_gg1 else None` instead of if-else-block (SIM108)
1277-1277: Local variable `nitem` is assigned to but never used. Remove assignment to unused variable `nitem` (F841)
deepmd/pd/model/descriptor/repformers.py
99-99: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
377-377: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
450-450: Loop control variable `idx` not used within loop body. Rename unused `idx` to `_idx` (B007)
528-532: Use ternary operator `sampled = merged() if callable(merged) else merged` instead of if-else-block (SIM108)
deepmd/pd/model/descriptor/se_atten.py
62-62: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
79-79: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
357-361: Use ternary operator `sampled = merged() if callable(merged) else merged` instead of if-else-block (SIM108)
381-381: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
431-431: Local variable `nall` is assigned to but never used. Remove assignment to unused variable `nall` (F841)
deepmd/pd/model/descriptor/se_t_tebd.py
130-130: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
137-137: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
386-386: Local variable `env_mat` is assigned to but never used. Remove assignment to unused variable `env_mat` (F841)
451-451: Local variable `nall` is assigned to but never used. Remove assignment to unused variable `nall` (F841)
507-507: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
514-514: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
696-700: Use ternary operator `sampled = merged() if callable(merged) else merged` instead of if-else-block (SIM108)
720-720: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
770-770: Local variable `nall` is assigned to but never used. Remove assignment to unused variable `nall` (F841)
deepmd/pd/model/model/make_model.py
335-335: Use `key in dict` instead of `key in dict.keys()`. Remove `.keys()` (SIM118)
336-336: Use `key not in dict` instead of `key not in dict.keys()`. Remove `.keys()` (SIM118)
deepmd/pd/model/model/spin_model.py
209-209: Local variable `entended_nlist` is assigned to but never used. Remove assignment to unused variable `entended_nlist` (F841)
400-400: Use `key in dict` instead of `key in dict.keys()`. Remove `.keys()` (SIM118)
deepmd/pd/model/model/transform_output.py
29-29: Local variable `faked_grad` is assigned to but never used. Remove assignment to unused variable `faked_grad` (F841)
🔇 Additional comments (29)
deepmd/pd/model/atomic_model/pairtab_atomic_model.py (1)
476-498: LGTM! Well-documented utility methods.
The utility methods are well-implemented with clear docstrings and appropriate handling of edge cases.
deepmd/pd/model/descriptor/repformers.py (1)
361-373: 🛠️ Refactor suggestion
Consider removing redundant property definitions.
The properties `dim_out`, `dim_in`, and `dim_emb` duplicate the functionality of their corresponding getter methods. Consider either:
- Using only properties and removing the getter methods, or
- Using only getter methods and removing the properties
If choosing option 1, apply this diff:
- def get_dim_out(self) -> int:
-     """Returns the output dimension."""
-     return self.dim_out
-
- def get_dim_in(self) -> int:
-     """Returns the input dimension."""
-     return self.dim_in
-
- def get_dim_emb(self) -> int:
-     """Returns the embedding dimension g2."""
-     return self.g2_dim
Likely invalid or redundant comment.
deepmd/pd/model/descriptor/dpa2.py (5)
77-310: LGTM! Constructor is well-documented and properly initialized.
The constructor is well-structured with comprehensive documentation and proper parameter initialization.
🧰 Tools
🪛 Ruff
88-88: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
311-372: LGTM! Getter methods are well-implemented.
The getter methods are concise, well-documented, and follow consistent patterns.
706-814: LGTM! Forward method is well-implemented.
The forward method has:
- Clear parameter validation
- Proper error handling
- Comprehensive documentation
815-858: LGTM! Update selection method is well-implemented.
The method properly handles:
- Selection updates
- Neighbor statistics
- Parameter validation
373-475: Consider additional validation in parameter sharing.
While the implementation is correct, consider adding validation for the following (a rough sketch appears after this list):
- Compatibility of dimensions between shared parameters
- Consistency of hyperparameters between models being shared
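A hedged sketch of what such pre-checks might look like; the attribute names `rcut` and `sel` are assumptions about typical descriptor fields, not verified against this PR:

def validate_shared_descriptors(self, base_class):
    # Same-type requirement, mirroring the assert already present in share_params.
    if self.__class__ is not base_class.__class__:
        raise TypeError("Only descriptors of the same type can share params!")
    # Dimension compatibility: shared statistics must have identical shapes.
    if hasattr(self, "mean") and tuple(self.mean.shape) != tuple(base_class.mean.shape):
        raise ValueError(
            f"stat shape mismatch: {self.mean.shape} vs {base_class.mean.shape}"
        )
    # Hyperparameter consistency between the models being shared.
    for attr in ("rcut", "sel"):  # attribute names are assumptions
        if getattr(self, attr, None) != getattr(base_class, attr, None):
            raise ValueError(f"hyperparameter {attr!r} differs between shared models")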
deepmd/pd/model/descriptor/env_mat.py (2)
13-46: Verify the correctness of the environment matrix calculation.
The `_make_env_mat` function appears to correctly calculate the smooth environment matrix based on the provided atomic coordinates and neighbor lists. It handles edge cases, such as invalid neighbor indices, and applies the necessary protection to avoid division by zero.
However, it would be beneficial to:
- Add unit tests to ensure the correctness of the calculations under various scenarios.
- Verify that the chosen default values for `radial_only` and `protection` are appropriate for the intended use cases.
To verify the correctness of the environment matrix calculation, consider the following:
1. Generate a set of test cases with known input values and expected output values. These test cases should cover various scenarios, including edge cases and potential pitfalls.
2. Create unit tests using the Paddle testing framework to assert that the `_make_env_mat` function produces the expected output for each test case.
3. Run the unit tests and ensure that they all pass. If any test fails, investigate the cause and make necessary corrections to the function.
4. Review the chosen default values for `radial_only` and `protection`. Consult with domain experts or refer to relevant literature to ensure that these values are appropriate for the intended use cases. If needed, adjust the default values or provide guidelines for users to choose suitable values based on their specific requirements.
By thoroughly testing the function and verifying the chosen default values, you can ensure the correctness and reliability of the environment matrix calculation. A minimal sketch of step 2 follows.
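For instance, a test along those lines might look like the following; the import path, argument order, and return values of `_make_env_mat` are assumptions based on this review, not verified against the module:

import numpy as np
import paddle

def test_env_mat_zero_distance_protection():
    from deepmd.pd.model.descriptor.env_mat import _make_env_mat  # path assumed

    coord = paddle.zeros([1, 2, 3])          # two coincident atoms in one frame
    nlist = paddle.to_tensor([[[1], [0]]])   # each atom lists the other as its neighbor
    # Assumed argument order: (nlist, coord, rcut, rcut_smth, ...).
    env, diff, switch = _make_env_mat(nlist, coord, 6.0, 0.5, protection=1e-6)
    # With coincident atoms, the protection term should prevent NaNs from 1/r.
    assert not np.isnan(env.numpy()).any()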
84-86: Verify the correctness of the normalization step.
The normalization of the environment matrix using the mean and standard deviation per atom type is a crucial step in the `prod_env_mat` function. It is important to verify that this normalization is performed correctly and achieves the desired effect.
To verify the correctness of the normalization step:
1. Prepare a set of test cases with known input values for `_env_mat_se_a`, `atype`, `mean`, and `stddev`. These test cases should cover various scenarios, including different atom types, edge cases, and potential numerical instabilities.
2. Create unit tests that compare the output of the normalization step (`env_mat_se_a`) with the expected values for each test case.
3. Run the unit tests and ensure that they all pass. If any test fails, investigate the cause and make necessary corrections to the normalization code.
4. Verify that the dimensions of `_env_mat_se_a`, `t_avg`, and `t_std` align correctly based on the `atype` values. Ensure that the broadcasting of `t_avg` and `t_std` is performed as expected.
5. Analyze the distribution of the normalized environment matrix values (`env_mat_se_a`) across different atom types and configurations. Check if the normalization brings the values to a consistent scale and removes any biases related to atom types.
6. If possible, compare the results of the normalization step with those from other established implementations or reference values from the literature to validate the correctness of the approach.
By thoroughly verifying the normalization step, you can ensure that the `prod_env_mat` function produces reliable and meaningful results for downstream tasks in the DeepMD framework. A conceptual sketch of this per-type normalization follows.
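Conceptually, the normalization being verified amounts to the following sketch (shapes are illustrative, not copied from the PR):

import paddle

def normalize_env_mat(env_mat_raw, atype, mean, stddev):
    # env_mat_raw: [nframes, nloc, nnei, 4]; atype: [nframes, nloc];
    # mean/stddev: [ntypes, nnei, 4]. Gather each atom's per-type statistics
    # and broadcast them over its neighbors.
    nframes, nloc = atype.shape
    t_avg = paddle.index_select(mean, atype.flatten(), axis=0).reshape(
        [nframes, nloc, *mean.shape[1:]]
    )
    t_std = paddle.index_select(stddev, atype.flatten(), axis=0).reshape(
        [nframes, nloc, *stddev.shape[1:]]
    )
    return (env_mat_raw - t_avg) / t_std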
deepmd/pd/model/model/transform_output.py (1)
233-234: Verify the usage of `.to(device=vv.place)`
In PaddlePaddle, the method `.to()` is not standard for moving tensors to devices. Instead, consider using `.cuda()` or `.cpu()`, or ensure that `vv.place` is correctly specified for device placement.
Run the following script to check the validity of the `.to()` method:
deepmd/pd/model/model/make_model.py (15)
1-67: LGTM!
The imports and the `make_model` function signature look good. The docstring provides a clear explanation of the purpose and usage of the function.
68-85: LGTM!
The `CM` class initialization is well-structured and properly handles the `atomic_model_` parameter. The precision dictionaries and global precision settings are correctly initialized.
86-88: LGTM!
The `model_output_def` method correctly retrieves the output definition for the model using the `atomic_output_def` method.
90-101: LGTM!
The `model_output_type` method correctly retrieves the output variable names based on their categories. The code handles the case where comprehension ifs are not supported in JIT mode.
103-171: LGTM!
The `forward_common` method is well-implemented and handles the model prediction process effectively. It takes care of input type casting, extending input and building neighbor lists, invoking the `forward_common_lower` method for predictions, communicating extended output, and output type casting. The method is well-documented with clear parameter descriptions and return values.
173-177: LGTM!
The `get_out_bias` and `set_out_bias` methods correctly retrieve and set the output bias of the atomic model.
179-205: LGTM!
The `change_out_bias` method is well-documented and handles changing the output bias of the atomic model based on the input data and the pretrained model. It supports different bias adjustment modes and properly delegates the task to the atomic model.
206-276: LGTM!
The `forward_common_lower` method is a lower-level interface that takes extended atomic coordinates, types, and neighbor lists as input and returns predictions on the extended region without reducing the output. It handles input type casting, neighbor list formatting, invoking the atomic model's `forward_common_atomic` method, fitting the output to the model output, and output type casting. The method is well-documented with clear parameter descriptions and return values.
323-349: LGTM!
The `output_type_cast` method correctly converts the model output to the input precision. It handles the casting based on the output variable operation (reduction) and the input precision.
🧰 Tools
🪛 Ruff
335-335: Use `key in dict` instead of `key in dict.keys()`. Remove `.keys()` (SIM118)
336-336: Use `key not in dict` instead of `key not in dict.keys()`. Remove `.keys()` (SIM118)
351-400: LGTM!
The `format_nlist` method is well-implemented and handles formatting the neighbor list based on different scenarios. It takes care of padding, truncation, and sorting the neighbor list as needed. The method also handles the case of mixed types and distinguishes atom types in the neighbor list accordingly. A rough sketch of the padding logic follows.
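This sketch assumes the usual deepmd convention that -1 marks an absent neighbor; sorting by distance and the mixed-type handling are omitted:

import paddle

def pad_nlist(nlist: paddle.Tensor, nnei: int) -> paddle.Tensor:
    # nlist: [..., n_found]; pad or cut the last axis to exactly nnei entries.
    n_found = nlist.shape[-1]
    if n_found >= nnei:
        return nlist[..., :nnei]  # truncate extra neighbors
    pad = paddle.full(
        [*nlist.shape[:-1], nnei - n_found], -1, dtype=nlist.dtype
    )
    return paddle.concat([nlist, pad], axis=-1)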
454-470: LGTM!
The `do_grad_r` and `do_grad_c` methods correctly delegate the gradient checks to the atomic model. They handle the case when `var_name` is `None` and return the appropriate boolean value.
472-483: LGTM!
The `change_type_map` method correctly handles changing the type-related parameters based on the provided `type_map` and the original model. It properly updates the statistics for new types if a model with new type statistics is provided.
485-490: LGTM!
The `serialize` and `deserialize` methods correctly handle serializing and deserializing the atomic model.
492-566: LGTM!
The various getter methods (`get_dim_fparam`, `get_dim_aparam`, `get_sel_type`, `is_aparam_nall`, `get_rcut`, `get_type_map`, `get_nsel`, `get_nnei`, `atomic_output_def`, `get_sel`, `mixed_types`, `has_message_passing`, `need_sorted_nlist_for_lower`) correctly retrieve the corresponding properties from the atomic model.
568-587: LGTM!
The `forward` method directly calls the `forward_common` method when no specific transform rule is required. It passes the appropriate arguments and returns the result.
deepmd/pd/model/model/spin_model.py (1)
398-399: Verify the duplication of `natoms[:, 2:]` in concatenation
In line 398, `natoms[:, 2:]` is concatenated twice:
[2 * natoms[:, :2], natoms[:, 2:], natoms[:, 2:]]
This may lead to unintended duplication of data. Please verify if both instances of `natoms[:, 2:]` are necessary or if one should be replaced with a different slice.
deepmd/pd/model/descriptor/se_atten.py (3)
922-922: Use Exception Instead of Assertion for Runtime Check
Similar to previous comments, using `assert` for runtime checks may not be reliable in production environments. Replace the `assert` with an explicit exception to ensure the check is always performed.
Apply this diff to replace the `assert` with a `ValueError`:
- assert input_r is not None, "input_r must be provided when dotr is True!"
+ if input_r is None:
+     raise ValueError("input_r must be provided when dotr is True!")
Likely invalid or redundant comment.
427-427: Use Exception Instead of Assertion for Runtime Check
Using `assert` statements for runtime checks is not recommended, especially in production code, because assertions can be bypassed when Python is executed with optimization flags (`-O`). Consider raising an appropriate exception to ensure that the check is always enforced.
Apply this diff to replace the `assert` with a `ValueError`:
- assert extended_atype_embd is not None
+ if extended_atype_embd is None:
+     raise ValueError("extended_atype_embd must be provided")
Likely invalid or redundant comment.
62-62: Avoid using mutable default arguments
The parameter `neuron` uses a mutable default argument `[25, 50, 100]`. It's advisable to set the default to `None` and initialize within the method to prevent unexpected behaviors.
Apply this diff to fix the issue:
- neuron: list = [25, 50, 100],
+ neuron: Optional[list[int]] = None,
And initialize inside the `__init__` method:
+ if neuron is None:
+     neuron = [25, 50, 100]
Don't forget to import `Optional` from `typing` if not already imported.
Likely invalid or redundant comment.
🧰 Tools
🪛 Ruff
62-62: Do not use mutable data structures for argument defaults. Replace with `None`; initialize within function (B006)
def share_params(self, base_class, shared_level, resume=False):
    """
    Share the parameters of self to the base_class with shared_level during multitask training.
    If not start from checkpoint (resume is False),
    some separated parameters (e.g. mean and stddev) will be re-calculated across different classes.
    """
    assert (
        self.__class__ == base_class.__class__
    ), "Only descriptors of the same type can share params!"
    if shared_level == 0:
        # link buffers
        if hasattr(self, "mean"):
            if not resume:
                # in case of change params during resume
                base_env = EnvMatStatSe(base_class)
                base_env.stats = base_class.stats
                for kk in base_class.get_stats():
                    base_env.stats[kk] += self.get_stats()[kk]
                mean, stddev = base_env()
                if not base_class.set_davg_zero:
                    paddle.assign(
                        paddle.to_tensor(mean).to(device=env.DEVICE),
                        base_class.mean,
                    )  # pylint: disable=no-explicit-dtype
                paddle.assign(
                    paddle.to_tensor(stddev).to(device=env.DEVICE),
                    base_class.stddev,
                )  # pylint: disable=no-explicit-dtype
            # must share, even if not do stat
            self.mean = base_class.mean
            self.stddev = base_class.stddev
        # self.set_state_dict(base_class.state_dict())  # this does not work, because it only inits the model
        # the following will successfully link all the params except buffers
        for item in self._sub_layers:
            self._sub_layers[item] = base_class._sub_layers[item]
    else:
        raise NotImplementedError
Several improvements needed in share_params implementation.
The current implementation has potential issues:
- No validation for shared_level parameter
- Magic number 0 used without explanation
- Potential race condition in parameter updates
- Missing validation of base_class parameters
Consider these improvements:
def share_params(self, base_class, shared_level, resume=False):
+    if not isinstance(shared_level, int) or shared_level < 0:
+        raise ValueError(f"shared_level must be a non-negative integer, got {shared_level}")
+    if not hasattr(base_class, 'stats'):
+        raise ValueError("base_class must have 'stats' attribute")
+
+    # Define sharing levels as constants
+    SHARE_ALL = 0  # Share all parameters including buffers
+
-    if shared_level == 0:
+    if shared_level == SHARE_ALL:
         # link buffers
         if hasattr(self, "mean"):
             if not resume:
                 # in case of change params during resume
                 base_env = EnvMatStatSe(base_class)
                 base_env.stats = base_class.stats
+                # Use atomic operations or locks if available
                 for kk in base_class.get_stats():
                     base_env.stats[kk] += self.get_stats()[kk]
Committable suggestion skipped: line range outside the PR's diff.
def extend_descrpt_stat(des, type_map, des_with_stat=None):
    r"""
    Extend the statistics of a descriptor block with types from newly provided `type_map`.

    After extending, the type related dimension of the extended statistics will have a length of
    `len(old_type_map) + len(type_map)`, where `old_type_map` represents the type map in `des`.
    The `get_index_between_two_maps()` function can then be used to correctly select statistics for types
    from `old_type_map` or `type_map`.
    Positive indices from 0 to `len(old_type_map) - 1` will select old statistics of types in `old_type_map`,
    while negative indices from `-len(type_map)` to -1 will select new statistics of types in `type_map`.

    Parameters
    ----------
    des : DescriptorBlock
        The descriptor block to be extended.
    type_map : List[str]
        The name of each type of atoms to be extended.
    des_with_stat : DescriptorBlock, Optional
        The descriptor block has additional statistics of types from newly provided `type_map`.
        If None, the default statistics will be used.
        Otherwise, the statistics provided in this DescriptorBlock will be used.

    """
    if des_with_stat is not None:
        extend_davg = des_with_stat["davg"]
        extend_dstd = des_with_stat["dstd"]
    else:
        extend_shape = [len(type_map), *list(des["davg"].shape[1:])]
        extend_davg = paddle.zeros(extend_shape, dtype=des["davg"].dtype).to(
            device=des["davg"].place
        )
        extend_dstd = paddle.ones(extend_shape, dtype=des["dstd"].dtype).to(
            device=des["dstd"].place
        )
    des["davg"] = paddle.concat([des["davg"], extend_davg], axis=0)
    des["dstd"] = paddle.concat([des["dstd"], extend_dstd], axis=0)
Add input validation and error handling in extend_descrpt_stat.
The function needs additional validation to ensure robust operation:
- Input parameter validation
- Shape compatibility checks
- Type consistency verification
Consider adding these validations:
def extend_descrpt_stat(des, type_map, des_with_stat=None):
+    if not isinstance(des, dict) or 'davg' not in des or 'dstd' not in des:
+        raise ValueError("des must be a dictionary containing 'davg' and 'dstd'")
+    if not isinstance(type_map, (list, tuple)) or not all(isinstance(t, str) for t in type_map):
+        raise ValueError("type_map must be a list of strings")
+
+    if des_with_stat is not None:
+        if not isinstance(des_with_stat, dict) or 'davg' not in des_with_stat or 'dstd' not in des_with_stat:
+            raise ValueError("des_with_stat must be a dictionary containing 'davg' and 'dstd'")
+        # Verify shape compatibility
+        if des_with_stat['davg'].shape[1:] != des['davg'].shape[1:]:
+            raise ValueError("Shape mismatch between des and des_with_stat")
+
     if des_with_stat is not None:
         extend_davg = des_with_stat["davg"]
         extend_dstd = des_with_stat["dstd"]
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def extend_descrpt_stat(des, type_map, des_with_stat=None): | |
r""" | |
Extend the statistics of a descriptor block with types from newly provided `type_map`. | |
After extending, the type related dimension of the extended statistics will have a length of | |
`len(old_type_map) + len(type_map)`, where `old_type_map` represents the type map in `des`. | |
The `get_index_between_two_maps()` function can then be used to correctly select statistics for types | |
from `old_type_map` or `type_map`.
Positive indices from 0 to `len(old_type_map) - 1` will select old statistics of types in `old_type_map`,
while negative indices from `-len(type_map)` to -1 will select new statistics of types in `type_map`.
Parameters
----------
des : DescriptorBlock
The descriptor block to be extended.
type_map : List[str]
The name of each type of atoms to be extended.
des_with_stat : DescriptorBlock, Optional
The descriptor block has additional statistics of types from newly provided `type_map`.
If None, the default statistics will be used.
Otherwise, the statistics provided in this DescriptorBlock will be used.
"""
if des_with_stat is not None:
    extend_davg = des_with_stat["davg"]
    extend_dstd = des_with_stat["dstd"]
else:
    extend_shape = [len(type_map), *list(des["davg"].shape[1:])]
    extend_davg = paddle.zeros(extend_shape, dtype=des["davg"].dtype).to(
        device=des["davg"].place
    )
    extend_dstd = paddle.ones(extend_shape, dtype=des["dstd"].dtype).to(
        device=des["dstd"].place
    )
des["davg"] = paddle.concat([des["davg"], extend_davg], axis=0)
des["dstd"] = paddle.concat([des["dstd"], extend_dstd], axis=0)

def extend_descrpt_stat(des, type_map, des_with_stat=None):
    r"""
    Extend the statistics of a descriptor block with types from newly provided `type_map`.
    After extending, the type related dimension of the extended statistics will have a length of
    `len(old_type_map) + len(type_map)`, where `old_type_map` represents the type map in `des`.
    The `get_index_between_two_maps()` function can then be used to correctly select statistics for types
    from `old_type_map` or `type_map`.
    Positive indices from 0 to `len(old_type_map) - 1` will select old statistics of types in `old_type_map`,
    while negative indices from `-len(type_map)` to -1 will select new statistics of types in `type_map`.

    Parameters
    ----------
    des : DescriptorBlock
        The descriptor block to be extended.
    type_map : List[str]
        The name of each type of atoms to be extended.
    des_with_stat : DescriptorBlock, Optional
        The descriptor block has additional statistics of types from newly provided `type_map`.
        If None, the default statistics will be used.
        Otherwise, the statistics provided in this DescriptorBlock will be used.
    """
    if not isinstance(des, dict) or 'davg' not in des or 'dstd' not in des:
        raise ValueError("des must be a dictionary containing 'davg' and 'dstd'")
    if not isinstance(type_map, (list, tuple)) or not all(isinstance(t, str) for t in type_map):
        raise ValueError("type_map must be a list of strings")
    if des_with_stat is not None:
        if not isinstance(des_with_stat, dict) or 'davg' not in des_with_stat or 'dstd' not in des_with_stat:
            raise ValueError("des_with_stat must be a dictionary containing 'davg' and 'dstd'")
        # Verify shape compatibility
        if des_with_stat['davg'].shape[1:] != des['davg'].shape[1:]:
            raise ValueError("Shape mismatch between des and des_with_stat")
    if des_with_stat is not None:
        extend_davg = des_with_stat["davg"]
        extend_dstd = des_with_stat["dstd"]
    else:
        extend_shape = [len(type_map), *list(des["davg"].shape[1:])]
        extend_davg = paddle.zeros(extend_shape, dtype=des["davg"].dtype).to(
            device=des["davg"].place
        )
        extend_dstd = paddle.ones(extend_shape, dtype=des["dstd"].dtype).to(
            device=des["dstd"].place
        )
    des["davg"] = paddle.concat([des["davg"], extend_davg], axis=0)
    des["dstd"] = paddle.concat([des["dstd"], extend_dstd], axis=0)
if self.tab_file is not None:
    (
        tab_info,
        tab_data,
    ) = self.tab.get()  # this returns -> tuple[np.array, np.array]
    nspline, ntypes_tab = tab_info[-2:].astype(int)
    self.register_buffer("tab_info", paddle.to_tensor(tab_info))
    self.register_buffer(
        "tab_data",
        paddle.to_tensor(tab_data).reshape(
            [ntypes_tab, ntypes_tab, nspline, 4]
        ),
    )
    if self.ntypes != ntypes_tab:
        raise ValueError(
            "The `type_map` provided does not match the number of columns in the table."
        )
else:
    self.register_buffer("tab_info", None)
    self.register_buffer("tab_data", None)
self.bias_atom_e = paddle.zeros(
    [self.ntypes, 1], dtype=env.GLOBAL_PD_ENER_FLOAT_PRECISION
).to(device=env.DEVICE)
🛠️ Refactor suggestion
Simplify buffer registration logic.
The conditional buffer registration logic could be simplified by moving the shape calculation outside the condition and using None
as a fallback.
- if self.tab_file is not None:
- (
- tab_info,
- tab_data,
- ) = self.tab.get() # this returns -> tuple[np.array, np.array]
- nspline, ntypes_tab = tab_info[-2:].astype(int)
- self.register_buffer("tab_info", paddle.to_tensor(tab_info))
- self.register_buffer(
- "tab_data",
- paddle.to_tensor(tab_data).reshape(
- [ntypes_tab, ntypes_tab, nspline, 4]
- ),
- )
- if self.ntypes != ntypes_tab:
- raise ValueError(
- "The `type_map` provided does not match the number of columns in the table."
- )
- else:
- self.register_buffer("tab_info", None)
- self.register_buffer("tab_data", None)
+ tab_info = None
+ tab_data = None
+ if self.tab_file is not None:
+ tab_info, tab_data = self.tab.get()
+ nspline, ntypes_tab = tab_info[-2:].astype(int)
+ if self.ntypes != ntypes_tab:
+ raise ValueError(
+ "The `type_map` provided does not match the number of columns in the table."
+ )
+ tab_data = paddle.to_tensor(tab_data).reshape(
+ [ntypes_tab, ntypes_tab, nspline, 4]
+ )
+ tab_info = paddle.to_tensor(tab_info)
+ self.register_buffer("tab_info", tab_info)
+ self.register_buffer("tab_data", tab_data)
📝 Committable suggestion
if self.tab_file is not None:
    (
        tab_info,
        tab_data,
    ) = self.tab.get()  # this returns -> tuple[np.array, np.array]
    nspline, ntypes_tab = tab_info[-2:].astype(int)
    self.register_buffer("tab_info", paddle.to_tensor(tab_info))
    self.register_buffer(
        "tab_data",
        paddle.to_tensor(tab_data).reshape(
            [ntypes_tab, ntypes_tab, nspline, 4]
        ),
    )
    if self.ntypes != ntypes_tab:
        raise ValueError(
            "The `type_map` provided does not match the number of columns in the table."
        )
else:
    self.register_buffer("tab_info", None)
    self.register_buffer("tab_data", None)
self.bias_atom_e = paddle.zeros(
    [self.ntypes, 1], dtype=env.GLOBAL_PD_ENER_FLOAT_PRECISION
).to(device=env.DEVICE)

tab_info = None
tab_data = None
if self.tab_file is not None:
    tab_info, tab_data = self.tab.get()
    nspline, ntypes_tab = tab_info[-2:].astype(int)
    if self.ntypes != ntypes_tab:
        raise ValueError(
            "The `type_map` provided does not match the number of columns in the table."
        )
    tab_data = paddle.to_tensor(tab_data).reshape(
        [ntypes_tab, ntypes_tab, nspline, 4]
    )
    tab_info = paddle.to_tensor(tab_info)
self.register_buffer("tab_info", tab_info)
self.register_buffer("tab_data", tab_data)
self.bias_atom_e = paddle.zeros(
    [self.ntypes, 1], dtype=env.GLOBAL_PD_ENER_FLOAT_PRECISION
).to(device=env.DEVICE)
update_residual_init: str = "norm", | ||
set_davg_zero: bool = True, | ||
smooth: bool = True, | ||
exclude_types: list[tuple[int, int]] = [], |
Fix mutable default argument in constructor.
Using mutable default arguments like empty lists can lead to unexpected behavior as the default list is shared across all instances.
Apply this diff to fix the issue:
- exclude_types: list[tuple[int, int]] = [],
+ exclude_types: Optional[list[tuple[int, int]]] = None,
And update the initialization inside the __init__
method:
if exclude_types is None:
exclude_types = []
🧰 Tools
🪛 Ruff
99-99: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
|
def reinit_exclude(
    self,
    exclude_types: list[tuple[int, int]] = [],
Fix mutable default argument in reinit_exclude
method.
Similar to the constructor, using a mutable default argument can cause unexpected behavior.
Apply this diff to fix the issue:
- exclude_types: list[tuple[int, int]] = [],
+ exclude_types: Optional[list[tuple[int, int]]] = None,
And update the initialization inside the method:
if exclude_types is None:
exclude_types = []
🧰 Tools
🪛 Ruff
377-377: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
.to(device=nlist.place)
.reshape([1, -1, 1])
Correct the use of 'device' parameter in .to()
method calls
In the specified lines, the .to()
method is called with the device
parameter, which is not appropriate in PaddlePaddle. The correct parameter should be place
. Please replace device=
with place=
to ensure proper functionality.
Apply the following changes to fix the syntax:
- .to(device=nlist.place)
+ .to(place=nlist.place)
- ).to(device=extended_tensor.place)
+ ).to(place=extended_tensor.place)
- ).to(device=aparam.place),
+ ).to(place=aparam.place),
Also applies to: 229-229, 253-253
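As a rough sketch of the `place` convention (assuming Paddle's documented `paddle.to_tensor(..., place=...)` signature):

```python
import paddle

# Build a tensor on an explicit place (use paddle.CUDAPlace(0) on GPU builds).
nlist = paddle.to_tensor([[0, 1, -1]], dtype="int64", place=paddle.CPUPlace())

# Put a derived tensor on the same place as an existing one.
idx = paddle.arange(0, 3, dtype=nlist.dtype)
idx = paddle.to_tensor(idx, place=nlist.place).reshape([1, -1, 1])
print(idx.place)
```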
tmp_dict["natoms"] = paddle.concat( | ||
[2 * natoms[:, :2], natoms[:, 2:], natoms[:, 2:]], axis=-1 | ||
) | ||
for item_key in sys.keys(): |
🛠️ Refactor suggestion
Simplify iteration by removing .keys()
from dictionary traversal
In line 400, you iterate over the dictionary keys using sys.keys()
. In Python, it's more idiomatic and efficient to iterate directly over the dictionary:
for item_key in sys:
This change simplifies the code and improves readability.
Apply this diff to update the code:
- for item_key in sys.keys():
+ for item_key in sys:
🧰 Tools
🪛 Ruff
400-400: Use key in dict
instead of key in dict.keys()
Remove .keys()
(SIM118)
elif self.tebd_input_mode in ["strip"]: | ||
# nfnl x nnei x ng | ||
gg_s = self.filter_layers.networks[0](ss) | ||
assert self.filter_layers_strip is not None |
Use Exception Instead of Assertion for Input Validation
Using assert
statements for input validation is not recommended in production code because assertions can be disabled with optimization flags (-O
). Consider raising a NotImplementedError
or ValueError
to handle this case explicitly.
Apply this diff to replace the assert
with an exception:
- assert self.filter_layers_strip is not None
+ if self.filter_layers_strip is None:
+ raise NotImplementedError("filter_layers_strip is not initialized for tebd_input_mode 'strip'.")
📝 Committable suggestion
assert self.filter_layers_strip is not None
if self.filter_layers_strip is None:
    raise NotImplementedError("filter_layers_strip is not initialized for tebd_input_mode 'strip'.")
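A quick illustration (hypothetical helper; run with `python -O`) of why `assert` cannot be relied on for this check:

```python
def apply_strip_filter(filter_layers_strip, ss):
    # Stripped entirely under `python -O`, so a None slips through here
    # and fails later with a confusing TypeError instead of failing fast.
    assert filter_layers_strip is not None
    return filter_layers_strip(ss)
```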
|
def reinit_exclude(
    self,
    exclude_types: list[tuple[int, int]] = [],
Avoid using mutable default arguments
The parameter exclude_types
in reinit_exclude
method uses a mutable default argument []
. It's advisable to set the default to None
and initialize within the method to prevent unexpected behaviors.
Apply this diff to fix the issue:
- def reinit_exclude(
- self,
- exclude_types: list[tuple[int, int]] = [],
- ):
+ def reinit_exclude(
+ self,
+ exclude_types: Optional[list[tuple[int, int]]] = None,
+ ):
+ if exclude_types is None:
+ exclude_types = []
📝 Committable suggestion
exclude_types: list[tuple[int, int]] = [],

def reinit_exclude(
    self,
    exclude_types: Optional[list[tuple[int, int]]] = None,
):
    if exclude_types is None:
        exclude_types = []
🧰 Tools
🪛 Ruff
381-381: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
temperature=None,
smooth: bool = True,
type_one_side: bool = False,
exclude_types: list[tuple[int, int]] = [],
Avoid using mutable default arguments
The parameter exclude_types
uses a mutable default argument []
. Using mutable default arguments can lead to unexpected behaviors. It's recommended to use None
as the default value and initialize the list within the function.
Apply this diff to fix the issue:
- exclude_types: list[tuple[int, int]] = [],
+ exclude_types: Optional[list[tuple[int, int]]] = None,
And initialize inside the __init__
method:
+ if exclude_types is None:
+ exclude_types = []
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff
79-79: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
Actionable comments posted: 18
🧹 Outside diff range and nitpick comments (15)
deepmd/pd/model/atomic_model/pairtab_atomic_model.py (2)
38-47
: Add a warning note about the lack of energy smoothing. The docstring mentions that the model doesn't smooth the energy at the cutoff radius, which could lead to discontinuities. Consider adding a warning note to make this limitation more prominent.
Apply this diff to enhance the documentation:
 This model can be used to tabulate the pairwise energy between atoms for either
 short-range or long-range interactions, such as D3, LJ, ZBL, etc. It should not
 be used alone, but rather as one submodel of a linear (sum) model, such as DP+D3.
+Warning
+-------
+This model does not smooth the energy at the cutoff radius. Users must ensure
+that the energy has been smoothed to zero to avoid discontinuities.
 Do not put the model on the first model of a linear model, since
 the linear model fetches the type map from the first model.
-At this moment, the model does not smooth the energy at the cutoff radius, so
-one needs to make sure the energy has been smoothed to zero.
342-343
: Improve error message for boundary check. The current error message for the boundary check is not descriptive enough. It should include the actual values that caused the error to aid in debugging.
Apply this diff to enhance the error message:
 if paddle.any(uu < 0):
-    raise Exception("coord go beyond table lower boundary")
+    raise ValueError(
+        f"Coordinates go beyond the table's lower boundary (rmin={self.tab_info[0]}). "
+        "This typically occurs when atoms are too close to each other."
+    )
deepmd/pd/model/descriptor/repformers.py (1)
450-450
: Rename unused loop variable. The loop control variable
idx
is not used within the loop body.Apply this diff to improve code clarity:
- for idx, ll in enumerate(self.layers): + for _, ll in enumerate(self.layers):🧰 Tools
🪛 Ruff
450-450: Loop control variable
idx
not used within loop bodyRename unused
idx
to_idx
(B007)
deepmd/pd/model/model/make_model.py (2)
90-101
: Consider using list comprehension for better readability.The current implementation uses a loop to build the output type list. Consider using a more concise list comprehension:
- vars: list[str] = [] - for kk, vv in var_defs.items(): - if vv.category == OutputVariableCategory.OUT.value: - vars.append(kk) - return vars + return [kk for kk, vv in var_defs.items() + if vv.category == OutputVariableCategory.OUT.value]
335-336
: Optimize dictionary key membership testing.Use direct dictionary membership testing instead of checking keys:
- if kk not in odef.keys(): + if kk not in odef:🧰 Tools
🪛 Ruff
335-335: Use
key in dict
instead ofkey in dict.keys()
Remove
.keys()
(SIM118)
336-336: Use
key not in dict
instead ofkey not in dict.keys()
Remove
.keys()
(SIM118)
deepmd/pd/model/descriptor/se_t_tebd.py (2)
814-818
: Add detailed comments for tensor operationsThe tensor operations replacing einsum are harder to understand. Consider adding detailed comments explaining the shape transformations and mathematical operations.
For example:
- # ij1m x i1km -> ijkm -> ijk
+ # Replace einsum("ijm,ikm->ijk", rr_i, rr_j) with explicit operations
+ # 1. unsqueeze adds dimensions for broadcasting
+ # 2. multiplication performs the dot product
+ # 3. sum reduces the last dimension
  rr_i.unsqueeze(2) * rr_j.unsqueeze(1)
766-770
: Enhance input validation with descriptive error messagesThe current input validation is minimal. Consider adding comprehensive shape checks with descriptive error messages to help users diagnose issues.
Example enhancement:
assert extended_atype_embd is not None + if extended_atype_embd.shape[-1] != self.tebd_dim: + raise ValueError( + f"Expected last dimension of extended_atype_embd to be {self.tebd_dim}, " + f"but got {extended_atype_embd.shape[-1]}" + ) nframes, nloc, nnei = nlist.shape atype = extended_atype[:, :nloc] nb = nframes🧰 Tools
🪛 Ruff
770-770: Local variable
nall
is assigned to but never usedRemove assignment to unused variable
nall
(F841)
deepmd/pd/model/descriptor/se_atten.py (3)
71-71
: Avoid using string literal as default argumentThe parameter
activation_function
uses a string literal as default argument without an explicit value. This could lead to confusion about the default value.Apply this diff to make the default value explicit:
- activation_function="tanh", + activation_function: str = "tanh",
502-518
: Add comments explaining tensor operations and transformationsThis section performs complex tensor operations including normalization, matrix multiplication, and reshaping. Consider adding inline comments explaining:
- The purpose of each transformation
- The expected shape at each step
- Why these specific operations are needed
660-675
: Enhance error handling in deserialization methodThe
deserialize
method should validate the required fields before attempting to use them. Consider adding validation for required fields and proper error messages.Example improvement:
@classmethod def deserialize(cls, data: dict) -> "NeighborGatedAttention": data = data.copy() + required_fields = ["attention_layers", "@version", "@class"] + for field in required_fields: + if field not in data: + raise ValueError(f"Missing required field: {field}") check_version_compatibility(data.pop("@version"), 1, 1) data.pop("@class") attention_layers = data.pop("attention_layers")deepmd/pd/model/descriptor/repformer_layer.py (2)
1156-1159
: Simplify conditional assignmentThe if-else block can be simplified using a ternary operator for better readability.
Apply this diff to simplify the code:
-if cal_gg1: - gg1 = _make_nei_g1(g1_ext, nlist) -else: - gg1 = None +gg1 = _make_nei_g1(g1_ext, nlist) if cal_gg1 else None🧰 Tools
🪛 Ruff
1156-1159: Use ternary operator
gg1 = _make_nei_g1(g1_ext, nlist) if cal_gg1 else None
instead ofif
-else
-blockReplace
if
-else
-block withgg1 = _make_nei_g1(g1_ext, nlist) if cal_gg1 else None
(SIM108)
580-844
: Consider architectural improvementsThe RepformerLayer class is quite complex and could benefit from some architectural improvements:
- Consider breaking down the class into smaller, more focused components to improve maintainability.
- The update flags (update_g1_has_conv, update_g2_has_attn, etc.) could be encapsulated in a configuration object.
- The residual handling could be extracted into a separate strategy pattern.
Would you like assistance in refactoring the class to improve its architecture?
deepmd/pd/model/descriptor/env_mat.py (2)
26-26
: Remove commented-out code for clarityThe commented-out line
# nlist = nlist * mask
can be removed to improve code readability. Since this implementation leads to NaNs in Hessian calculations and is not used, removing it avoids confusion.
30-31
: Add explanation for usingdecomp
functions over Paddle equivalentsLines 30 and 34 comment out the Paddle implementations in favor of
decomp
functions:
paddle.take_along_axis
is replaced withdecomp.take_along_axis
.paddle.linalg.norm
is replaced withdecomp.norm
.Providing a brief comment or documentation about why
decomp
functions are preferred here would enhance maintainability and help future developers understand the rationale.deepmd/pd/model/model/spin_model.py (1)
400-400
: Simplify the loop by iterating directly over the dictionary.Instead of iterating over
sys.keys()
, you can iterate directly oversys
. This makes the code more Pythonic and efficient.Apply this diff to simplify the loop:
- for item_key in sys.keys(): + for item_key in sys:🧰 Tools
🪛 Ruff
400-400: Use
key in dict
instead ofkey in dict.keys()
Remove
.keys()
(SIM118)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (13)
deepmd/pd/loss/ener.py
(1 hunks)deepmd/pd/model/atomic_model/pairtab_atomic_model.py
(1 hunks)deepmd/pd/model/descriptor/descriptor.py
(1 hunks)deepmd/pd/model/descriptor/dpa2.py
(1 hunks)deepmd/pd/model/descriptor/env_mat.py
(1 hunks)deepmd/pd/model/descriptor/gaussian_lcc.py
(1 hunks)deepmd/pd/model/descriptor/repformer_layer.py
(1 hunks)deepmd/pd/model/descriptor/repformers.py
(1 hunks)deepmd/pd/model/descriptor/se_atten.py
(1 hunks)deepmd/pd/model/descriptor/se_t_tebd.py
(1 hunks)deepmd/pd/model/model/make_model.py
(1 hunks)deepmd/pd/model/model/spin_model.py
(1 hunks)deepmd/pd/model/model/transform_output.py
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- deepmd/pd/loss/ener.py
- deepmd/pd/model/descriptor/descriptor.py
- deepmd/pd/model/descriptor/gaussian_lcc.py
🧰 Additional context used
🪛 Ruff
deepmd/pd/model/descriptor/dpa2.py
88-88: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
509-509: Loop control variable ii
not used within loop body
Rename unused ii
to _ii
(B007)
694-694: Local variable env_mat
is assigned to but never used
Remove assignment to unused variable env_mat
(F841)
deepmd/pd/model/descriptor/repformer_layer.py
967-967: Local variable ng2
is assigned to but never used
Remove assignment to unused variable ng2
(F841)
1156-1159: Use ternary operator gg1 = _make_nei_g1(g1_ext, nlist) if cal_gg1 else None
instead of if
-else
-block
Replace if
-else
-block with gg1 = _make_nei_g1(g1_ext, nlist) if cal_gg1 else None
(SIM108)
1277-1277: Local variable nitem
is assigned to but never used
Remove assignment to unused variable nitem
(F841)
deepmd/pd/model/descriptor/repformers.py
99-99: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
377-377: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
450-450: Loop control variable idx
not used within loop body
Rename unused idx
to _idx
(B007)
528-532: Use ternary operator sampled = merged() if callable(merged) else merged
instead of if
-else
-block
(SIM108)
deepmd/pd/model/descriptor/se_atten.py
62-62: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
79-79: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
357-361: Use ternary operator sampled = merged() if callable(merged) else merged
instead of if
-else
-block
(SIM108)
381-381: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
431-431: Local variable nall
is assigned to but never used
Remove assignment to unused variable nall
(F841)
deepmd/pd/model/descriptor/se_t_tebd.py
130-130: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
137-137: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
386-386: Local variable env_mat
is assigned to but never used
Remove assignment to unused variable env_mat
(F841)
451-451: Local variable nall
is assigned to but never used
Remove assignment to unused variable nall
(F841)
507-507: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
514-514: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
696-700: Use ternary operator sampled = merged() if callable(merged) else merged
instead of if
-else
-block
(SIM108)
720-720: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
770-770: Local variable nall
is assigned to but never used
Remove assignment to unused variable nall
(F841)
deepmd/pd/model/model/make_model.py
335-335: Use key in dict
instead of key in dict.keys()
Remove .keys()
(SIM118)
336-336: Use key not in dict
instead of key not in dict.keys()
Remove .keys()
(SIM118)
deepmd/pd/model/model/spin_model.py
209-209: Local variable entended_nlist
is assigned to but never used
Remove assignment to unused variable entended_nlist
(F841)
400-400: Use key in dict
instead of key in dict.keys()
Remove .keys()
(SIM118)
deepmd/pd/model/model/transform_output.py
29-29: Local variable faked_grad
is assigned to but never used
Remove assignment to unused variable faked_grad
(F841)
🔇 Additional comments (19)
deepmd/pd/model/atomic_model/pairtab_atomic_model.py (1)
476-498
: LGTM! Well-documented utility methods.
The utility methods are well-structured, properly documented, and follow type hinting best practices.
deepmd/pd/model/descriptor/repformers.py (3)
296-374
: LGTM! Well-documented getter methods.
The getter methods are well-implemented with clear documentation and single responsibilities.
542-556
: LGTM! Well-implemented utility methods.
The utility methods are well-implemented with proper error handling and clear documentation.
528-532
: 🛠️ Refactor suggestion
Simplify assignment using a ternary operator.
The assignment to sampled
can be simplified using a ternary operator for clarity and conciseness.
Apply this diff to refactor the code:
- if callable(merged):
- # only get data for once
- sampled = merged()
- else:
- sampled = merged
+ sampled = merged() if callable(merged) else merged
Likely invalid or redundant comment.
🧰 Tools
🪛 Ruff
528-532: Use ternary operator sampled = merged() if callable(merged) else merged
instead of if
-else
-block
(SIM108)
deepmd/pd/model/model/make_model.py (5)
45-67
: LGTM: Well-documented factory function with clear purpose and return type.
The factory function is well-structured with comprehensive docstrings explaining its purpose and parameters.
104-172
: LGTM: Well-implemented forward method with proper documentation and error handling.
The method properly handles optional parameters and follows a clear flow of operations.
173-205
: LGTM: Clean delegation of bias handling to atomic model.
The methods properly delegate bias handling operations to the atomic model with clear documentation.
351-452
: LGTM: Comprehensive neighbor list handling with proper edge cases.
The implementation properly handles various scenarios for neighbor list sizes and formatting.
453-587
: LGTM: Well-implemented utility methods with clear documentation.
The gradient, type mapping, serialization, and utility methods are properly implemented with comprehensive documentation.
deepmd/pd/model/descriptor/se_atten.py (1)
1-52
: LGTM! Well-organized imports and file structure.
The imports are properly organized, grouped logically, and all are being used in the code.
deepmd/pd/model/descriptor/repformer_layer.py (4)
155-577
: Well-structured attention mechanism implementation
The attention-related classes (Atten2Map, Atten2MultiHeadApply, Atten2EquiVarApply, LocalAtten) are well-implemented with:
- Clear separation of concerns
- Proper error handling and input validation
- Comprehensive documentation
- Type hints for better code understanding
- Consistent serialization/deserialization methods
1106-1254
: Verify tensor operations and numerical stability
The forward method involves multiple tensor operations that could impact memory usage and numerical stability:
- Memory efficiency:
- Multiple tensor concatenations and splits
- Creation of intermediate tensors in attention calculations
- Numerical stability:
- Division operations with small epsilon values
- Softmax operations on potentially large values
Run the following script to check for potential numerical stability issues:
✅ Verification successful
Numerical stability measures are properly implemented
The code review revealed that the numerical stability concerns are adequately addressed:
- The epsilon value of 1e-4 used in the code is a safe threshold for preventing division by zero issues
- The softmax operations include proper masking to handle invalid values:
attnw = paddle.nn.functional.softmax(attnw, axis=-1) attnw = attnw.masked_fill(attnw_mask, 0.0)
Regarding memory efficiency:
- The tensor operations follow standard practices with proper masking
- The intermediate tensors are managed through lists (g2_update, h2_update, g1_update) which allows for garbage collection
- The concatenations are performed only when necessary and on the last dimension which is memory efficient
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for potential numerical stability issues in tensor operations
# Test 1: Check for division operations with small epsilon values
rg -A 2 'epsilon.*=.*1e-[4-9]'
# Test 2: Check for potential numerical instability in softmax operations
rg -A 5 'paddle\.nn\.functional\.softmax'
Length of output: 10822
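For reference, a minimal sketch of that masking pattern (using `paddle.where` rather than `masked_fill`, which may be missing in older Paddle releases):

```python
import paddle

scores = paddle.to_tensor([[2.0, 1.0, 0.5]])
pad = paddle.to_tensor([[False, False, True]])  # True marks invalid neighbors
neg_inf = paddle.full_like(scores, float("-inf"))
attnw = paddle.nn.functional.softmax(paddle.where(pad, neg_inf, scores), axis=-1)
attnw = paddle.where(pad, paddle.zeros_like(attnw), attnw)  # exact zeros at pads
print(attnw)
```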
🧰 Tools
🪛 Ruff
1156-1159: Use ternary operator gg1 = _make_nei_g1(g1_ext, nlist) if cal_gg1 else None
instead of if
-else
-block
Replace if
-else
-block with gg1 = _make_nei_g1(g1_ext, nlist) if cal_gg1 else None
(SIM108)
897-927
:
Remove unused variable ng2
The variable ng2
is assigned but never used in the method. Consider removing it to improve code cleanliness.
Apply this diff to remove the unused variable:
def _update_g1_conv(
self,
gg1: paddle.Tensor,
g2: paddle.Tensor,
nlist_mask: paddle.Tensor,
sw: paddle.Tensor,
) -> paddle.Tensor:
assert self.proj_g1g2 is not None
nb, nloc, nnei, _ = g2.shape
ng1 = gg1.shape[-1]
- ng2 = g2.shape[-1]
if not self.g1_out_conv:
# gg1 : nb x nloc x nnei x ng2
gg1 = self.proj_g1g2(gg1).reshape([nb, nloc, nnei, g2.shape[-1]])  # read the width from g2, since ng2 was removed above
Likely invalid or redundant comment.
85-116
:
Remove unused variable ng1
The variable ng1
is assigned but never used in the function. Consider removing it to improve code cleanliness.
Apply this diff to remove the unused variable:
def _make_nei_g1(
g1_ext: paddle.Tensor,
nlist: paddle.Tensor,
) -> paddle.Tensor:
# nlist: nb x nloc x nnei
nb, nloc, nnei = nlist.shape
# g1_ext: nb x nall x ng1
- ng1 = g1_ext.shape[-1]
# index: nb x (nloc x nnei) x ng1
index = nlist.reshape([nb, nloc * nnei]).unsqueeze(-1).expand([-1, -1, g1_ext.shape[-1]])
# gg1 : nb x (nloc x nnei) x ng1
gg1 = decomp.take_along_axis(g1_ext, axis=1, indices=index)
# gg1 : nb x nloc x nnei x ng1
gg1 = gg1.reshape([nb, nloc, nnei, g1_ext.shape[-1]])  # ng1 was removed above, so take the width from g1_ext
return gg1
Likely invalid or redundant comment.
deepmd/pd/model/descriptor/env_mat.py (2)
38-39
: Verify the impact of the 'protection' parameter on division operations
The variables t0
and t1
are computed using length + protection
to prevent division by zero:
t0 = 1 / (length + protection)
t1 = diff / (length + protection) ** 2
Please verify that adding protection
does not adversely affect the accuracy of the calculations, especially when length
is very small. An alternative might be to use a small epsilon value or apply a conditional check.
Would you like assistance in verifying the numerical stability of these computations?
1-1
: License declaration is correctly specified
The SPDX license identifier is properly included at the beginning of the file, ensuring compliance with licensing requirements.
deepmd/pd/model/model/spin_model.py (3)
209-211
: Variable entended_nlist
is assigned but never used.
The variable entended_nlist
is assigned a value but is not utilized afterwards. Additionally, there is a typo in the variable name (entended_nlist
instead of extended_nlist
). This could lead to confusion and potential bugs.
🧰 Tools
🪛 Ruff
209-209: Local variable entended_nlist
is assigned to but never used
Remove assignment to unused variable entended_nlist
(F841)
348-360
:
Ensure model_output_def
updates the correct output definition.
In the model_output_def
method, modifying backbone_model_atomic_output_def[var_name].magnetic
directly may unintentionally affect the original object's state due to shallow copying. To avoid this, consider using deepcopy
when copying the output definitions.
Apply this diff to create a deep copy of the output definitions:
- backbone_model_atomic_output_def = self.backbone_model.atomic_output_def()
+ backbone_model_atomic_output_def = deepcopy(self.backbone_model.atomic_output_def())
Likely invalid or redundant comment.
152-153
: Ensure proper handling of boolean logic and indexing.
Double-check the logic used in these sections to ensure that the boolean masks and indexing are correctly applied, preventing potential off-by-one errors or index out-of-range exceptions.
Run the following script to verify the shapes and indices:
Also applies to: 233-236, 509-515
✅ Verification successful
Let me analyze the code further to verify the boolean logic and indexing concerns.
Based on the analysis of the code, I can now generate the final response:
Boolean masks and indexing operations are properly implemented
The implementation shows correct handling of boolean masks and tensor shapes:
spin_mask
is properly initialized from the spin configuration- Device/place conversion is handled correctly with
.to()
calls - Tensor reshaping operations maintain consistent dimensions
- Indexing operations are properly aligned with the documented tensor structure
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify tensor shapes and indices in the codebase.
# Test: Search for lines where boolean masks are used with tensors.
rg ">\s*0\.0" deepmd/pd/model/model/spin_model.py -A 2 -B 2
# Test: Search for indexing operations to check for potential issues.
rg "\[[^\]]+\]" deepmd/pd/model/model/spin_model.py
Length of output: 7547
Script:
#!/bin/bash
# Let's examine the spin_mask initialization and usage
ast-grep --pattern 'self.spin_mask'
# Check the shape and type handling
rg "virtual_scale_mask.*to\(" deepmd/pd/model/model/spin_model.py -B 5 -A 5
# Look for any error handling or assertions related to shapes
rg "assert.*shape" deepmd/pd/model/model/spin_model.py
Length of output: 3573
if self.tab_file is not None:
    (
        tab_info,
        tab_data,
    ) = self.tab.get()  # this returns -> tuple[np.array, np.array]
    nspline, ntypes_tab = tab_info[-2:].astype(int)
    self.register_buffer("tab_info", paddle.to_tensor(tab_info))
    self.register_buffer(
        "tab_data",
        paddle.to_tensor(tab_data).reshape(
            [ntypes_tab, ntypes_tab, nspline, 4]
        ),
    )
    if self.ntypes != ntypes_tab:
        raise ValueError(
            "The `type_map` provided does not match the number of columns in the table."
        )
else:
    self.register_buffer("tab_info", None)
    self.register_buffer("tab_data", None)
self.bias_atom_e = paddle.zeros(
🛠️ Refactor suggestion
Refactor buffer registration for better readability.
The conditional buffer registration logic can be simplified by extracting the common buffer registration code into a separate method.
Consider refactoring the code as follows:
# handle deserialization with no input file
+ def _register_tab_buffers(self, tab_info=None, tab_data=None):
+ self.register_buffer("tab_info", paddle.to_tensor(tab_info) if tab_info is not None else None)
+ self.register_buffer("tab_data", paddle.to_tensor(tab_data).reshape(
+ [int(tab_info[-1]), int(tab_info[-1]), int(tab_info[-2]), 4]
+ ) if tab_data is not None else None)
+
if self.tab_file is not None:
(
tab_info,
tab_data,
) = self.tab.get() # this returns -> tuple[np.array, np.array]
nspline, ntypes_tab = tab_info[-2:].astype(int)
- self.register_buffer("tab_info", paddle.to_tensor(tab_info))
- self.register_buffer(
- "tab_data",
- paddle.to_tensor(tab_data).reshape(
- [ntypes_tab, ntypes_tab, nspline, 4]
- ),
- )
+ self._register_tab_buffers(tab_info, tab_data)
if self.ntypes != ntypes_tab:
raise ValueError(
"The `type_map` provided does not match the number of columns in the table."
)
else:
- self.register_buffer("tab_info", None)
- self.register_buffer("tab_data", None)
+ self._register_tab_buffers()
📝 Committable suggestion
if self.tab_file is not None:
    (
        tab_info,
        tab_data,
    ) = self.tab.get()  # this returns -> tuple[np.array, np.array]
    nspline, ntypes_tab = tab_info[-2:].astype(int)
    self.register_buffer("tab_info", paddle.to_tensor(tab_info))
    self.register_buffer(
        "tab_data",
        paddle.to_tensor(tab_data).reshape(
            [ntypes_tab, ntypes_tab, nspline, 4]
        ),
    )
    if self.ntypes != ntypes_tab:
        raise ValueError(
            "The `type_map` provided does not match the number of columns in the table."
        )
else:
    self.register_buffer("tab_info", None)
    self.register_buffer("tab_data", None)
self.bias_atom_e = paddle.zeros(

def _register_tab_buffers(self, tab_info=None, tab_data=None):
    self.register_buffer("tab_info", paddle.to_tensor(tab_info) if tab_info is not None else None)
    self.register_buffer("tab_data", paddle.to_tensor(tab_data).reshape(
        [int(tab_info[-1]), int(tab_info[-1]), int(tab_info[-2]), 4]
    ) if tab_data is not None else None)

if self.tab_file is not None:
    (
        tab_info,
        tab_data,
    ) = self.tab.get()  # this returns -> tuple[np.array, np.array]
    nspline, ntypes_tab = tab_info[-2:].astype(int)
    self._register_tab_buffers(tab_info, tab_data)
    if self.ntypes != ntypes_tab:
        raise ValueError(
            "The `type_map` provided does not match the number of columns in the table."
        )
else:
    self._register_tab_buffers()
self.bias_atom_e = paddle.zeros(
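For orientation, a minimal sketch of the buffer mechanics assumed above (hypothetical `TableHolder` layer; Paddle buffers travel with `state_dict` but receive no gradients):

```python
import paddle

class TableHolder(paddle.nn.Layer):
    def __init__(self, tab_info=None):
        super().__init__()
        # Registered as a buffer, not a parameter: saved/loaded, never trained.
        self.register_buffer(
            "tab_info",
            paddle.to_tensor(tab_info) if tab_info is not None else None,
        )

holder = TableHolder([2.0, 10.0])
print(list(holder.state_dict().keys()))  # ['tab_info']
```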
update_residual_init: str = "norm", | ||
set_davg_zero: bool = True, | ||
smooth: bool = True, | ||
exclude_types: list[tuple[int, int]] = [], |
Fix mutable default argument in constructor.
Using mutable default arguments like empty lists can lead to unexpected behavior as the default list is shared across all instances.
Apply this diff to fix the issue:
- exclude_types: list[tuple[int, int]] = [],
+ exclude_types: Optional[list[tuple[int, int]]] = None,
And update the initialization inside the __init__
method:
if exclude_types is None:
exclude_types = []
self.reinit_exclude(exclude_types)
🧰 Tools
🪛 Ruff
99-99: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
def reinit_exclude(
    self,
    exclude_types: list[tuple[int, int]] = [],
):
    self.exclude_types = exclude_types
    self.emask = PairExcludeMask(self.ntypes, exclude_types=exclude_types)
|
Fix mutable default argument in reinit_exclude
method.
Similar to the constructor, using a mutable default argument can cause unexpected behavior.
Apply this diff to fix the issue:
- exclude_types: list[tuple[int, int]] = [],
+ exclude_types: Optional[list[tuple[int, int]]] = None,
And update the initialization inside the method:
if exclude_types is None:
exclude_types = []
self.exclude_types = exclude_types
self.emask = PairExcludeMask(self.ntypes, exclude_types=exclude_types)
🧰 Tools
🪛 Ruff
377-377: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
def forward_common_lower(
    self,
    extended_coord,
    extended_atype,
    nlist,
    mapping: Optional[paddle.Tensor] = None,
    fparam: Optional[paddle.Tensor] = None,
    aparam: Optional[paddle.Tensor] = None,
    do_atomic_virial: bool = False,
    comm_dict: Optional[dict[str, paddle.Tensor]] = None,
    extra_nlist_sort: bool = False,
):
    """Return model prediction. Lower interface that takes
    extended atomic coordinates and types, nlist, and mapping
    as input, and returns the predictions on the extended region.
    The predictions are not reduced.

    Parameters
    ----------
    extended_coord
        coordinates in extended region. nf x (nall x 3)
    extended_atype
        atomic type in extended region. nf x nall
    nlist
        neighbor list. nf x nloc x nsel.
    mapping
        maps the extended indices to local indices. nf x nall.
    fparam
        frame parameter. nf x ndf
    aparam
        atomic parameter. nf x nloc x nda
    do_atomic_virial
        whether calculate atomic virial.
    comm_dict
        The data needed for communication for parallel inference.
    extra_nlist_sort
        whether to forcibly sort the nlist.

    Returns
    -------
    result_dict
        the result dict, defined by the `FittingOutputDef`.

    """
    nframes, nall = extended_atype.shape[:2]
    extended_coord = extended_coord.reshape([nframes, -1, 3])
    nlist = self.format_nlist(
        extended_coord, extended_atype, nlist, extra_nlist_sort=extra_nlist_sort
    )
    cc_ext, _, fp, ap, input_prec = self.input_type_cast(
        extended_coord, fparam=fparam, aparam=aparam
    )
    del extended_coord, fparam, aparam
    atomic_ret = self.atomic_model.forward_common_atomic(
        cc_ext,
        extended_atype,
        nlist,
        mapping=mapping,
        fparam=fp,
        aparam=ap,
        comm_dict=comm_dict,
    )
    model_predict = fit_output_to_model_output(
        atomic_ret,
        self.atomic_output_def(),
        cc_ext,
        do_atomic_virial=do_atomic_virial,
        create_graph=self.training,
    )
    model_predict = self.output_type_cast(model_predict, input_prec)
    return model_predict
🛠️ Refactor suggestion
Consider extracting shape transformation logic.
The method handles both coordinate reshaping and model prediction. Consider extracting the shape transformation logic into a separate method for better maintainability:
+ def _reshape_coordinates(self, extended_coord: paddle.Tensor) -> paddle.Tensor:
+ """Reshape coordinates to the expected format."""
+ nframes = extended_coord.shape[0]
+ return extended_coord.reshape([nframes, -1, 3])
+
def forward_common_lower(self, ...):
- extended_coord = extended_coord.reshape([nframes, -1, 3])
+ extended_coord = self._reshape_coordinates(extended_coord)
Committable suggestion skipped: line range outside the PR's diff.
rcut_smth: float,
sel: Union[list[int], int],
ntypes: int,
neuron: list = [2, 4, 8],
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Replace mutable default arguments with None
Using mutable objects (lists) as default arguments can lead to unexpected behavior when the same default list is shared across multiple instances. Instead, initialize these arguments as None
and create the list within the function.
Apply these changes:
- neuron: list = [2, 4, 8],
+ neuron: Optional[list] = None,
- exclude_types: list[tuple[int, int]] = [],
+ exclude_types: Optional[list[tuple[int, int]]] = None,
- neuron: list = [25, 50, 100],
+ neuron: Optional[list] = None,
- exclude_types: list[tuple[int, int]] = [],
+ exclude_types: Optional[list[tuple[int, int]]] = None,
- exclude_types: list[tuple[int, int]] = [],
+ exclude_types: Optional[list[tuple[int, int]]] = None,
Then initialize within the functions:
def __init__(self, ...):
neuron = [2, 4, 8] if neuron is None else neuron
exclude_types = [] if exclude_types is None else exclude_types
Also applies to: 137-137, 507-507, 514-514, 720-720
🧰 Tools
🪛 Ruff
130-130: Do not use mutable data structures for argument defaults
Replace with None
; initialize within function
(B006)
.to(device=nlist.place)
.reshape([1, -1, 1])
.expand([nframes, -1, -1])
)
Correct the use of .to(device=...)
method in PaddlePaddle.
The .to(device=...)
method is not available in PaddlePaddle; it's specific to PyTorch. In PaddlePaddle, to specify the device for a tensor, you should use paddle.to_tensor()
with the place
argument or tensor.cuda()
/tensor.cpu()
. For moving existing tensors, you can use tensor.cuda()
or tensor.cpu()
.
Apply the following changes to correct the device placement:
For lines 190-193:
- paddle.arange(0, nloc, dtype=nlist.dtype)
- .to(device=nlist.place)
+ paddle.arange(0, nloc, dtype=nlist.dtype, place=nlist.place)
For lines 227-229:
- ).to(device=extended_tensor.place)
+ ).astype(extended_tensor.dtype, place=extended_tensor.place)
For lines 251-253:
- ).to(device=aparam.place),
+ ).astype(aparam.dtype, place=aparam.place),
For lines 357-359:
- elif name in self._buffers:
- return self._buffers[name]
+ else:
+ return super().__getattr__(name)
Also applies to: 227-229, 251-253, 357-359
"""A spin model for energy.""" | ||
|
||
model_type = "ener" | ||
|
🛠️ Refactor suggestion
Add class variable model_type
to SpinModel
base class.
The model_type
variable is assigned in SpinEnergyModel
, but it might be beneficial to define it in the base class SpinModel
for consistency and to avoid potential attribute errors if other subclasses rely on it.
Add model_type
to the SpinModel
class:
class SpinModel(paddle.nn.Layer):
"""A spin model wrapper, with spin input preprocess and output split."""
+ model_type = "base"
def __init__(
📝 Committable suggestion
"""A spin model for energy.""" | |
model_type = "ener" | |
"""A spin model for energy.""" | |
model_type = "base" | |
nlist_mask = nlist != -1
nlist[nlist == -1] = 0
nlist_shift = nlist + nall
nlist[~nlist_mask] = -1
nlist_shift[~nlist_mask] = -1
self_real = (
🛠️ Refactor suggestion
Avoid modifying tensors in place when using masks.
Assigning values to tensors using boolean masks can lead to unexpected behavior in PaddlePaddle. Instead of modifying nlist
and nlist_shift
in place, consider creating new tensors to hold the modified values.
Refactor the code as follows:
nlist_mask = nlist != -1
- nlist[nlist == -1] = 0
+ nlist_zeroed = paddle.where(nlist_mask, nlist, paddle.zeros_like(nlist))
nlist_shift = nlist_zeroed + nall
- nlist[~nlist_mask] = -1
- nlist_shift[~nlist_mask] = -1
+ nlist_zeroed = paddle.where(nlist_mask, nlist_zeroed, paddle.full_like(nlist, -1))
+ nlist_shift = paddle.where(nlist_mask, nlist_shift, paddle.full_like(nlist_shift, -1))
📝 Committable suggestion
nlist_mask = nlist != -1
nlist[nlist == -1] = 0
nlist_shift = nlist + nall
nlist[~nlist_mask] = -1
nlist_shift[~nlist_mask] = -1
self_real = (

nlist_mask = nlist != -1
nlist_zeroed = paddle.where(nlist_mask, nlist, paddle.zeros_like(nlist))
nlist_shift = nlist_zeroed + nall
nlist_zeroed = paddle.where(nlist_mask, nlist_zeroed, paddle.full_like(nlist, -1))
nlist_shift = paddle.where(nlist_mask, nlist_shift, paddle.full_like(nlist_shift, -1))
self_real = (
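A tiny out-of-place demo of the `paddle.where` pattern (assumed shapes; `-1` marks padding):

```python
import paddle

nlist = paddle.to_tensor([[0, 3, -1], [2, -1, -1]])
nall = 4
mask = nlist != -1
# Shift real neighbor indices by nall, keep the -1 padding untouched.
nlist_shift = paddle.where(mask, nlist + nall, paddle.full_like(nlist, -1))
print(nlist_shift)  # [[4, 7, -1], [6, -1, -1]]
```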
|
Parameters
----------
extended_coord
    The extended coordinates of atoms. shape: nf x (nall x 3)
extended_atype
    The extended atom types. shape: nf x nall
nlist
    The neighbor list. shape: nf x nloc x nnei
mapping
    The index mapping, maps extended region index to local region.
comm_dict
    The data needed for communication for parallel inference.

Returns
-------
descriptor
    The descriptor. shape: nf x nloc x (ng x axis_neuron)
gr
    The rotationally equivariant and permutationally invariant single particle
    representation. shape: nf x nloc x ng x 3
g2
    The rotationally invariant pair-particle representation.
    shape: nf x nloc x nnei x ng
h2
    The rotationally equivariant pair-particle representation.
    shape: nf x nloc x nnei x 3
sw
    The smooth switch function. shape: nf x nloc x nnei

"""
use_three_body = self.use_three_body
nframes, nloc, nnei = nlist.shape
nall = extended_coord.reshape([nframes, -1]).shape[1] // 3
# nlists
nlist_dict = build_multiple_neighbor_list(
    extended_coord,
    nlist,
    self.rcut_list,
    self.nsel_list,
)
# repinit
g1_ext = self.type_embedding(extended_atype)
g1_inp = g1_ext[:, :nloc, :]
g1, _, _, _, _ = self.repinit(
    nlist_dict[
        get_multiple_nlist_key(self.repinit.get_rcut(), self.repinit.get_nsel())
    ],
    extended_coord,
    extended_atype,
    g1_ext,
    mapping,
)
if use_three_body:
    assert self.repinit_three_body is not None
    g1_three_body, __, __, __, __ = self.repinit_three_body(
        nlist_dict[
            get_multiple_nlist_key(
                self.repinit_three_body.get_rcut(),
                self.repinit_three_body.get_nsel(),
            )
        ],
        extended_coord,
        extended_atype,
        g1_ext,
        mapping,
    )
    g1 = paddle.concat([g1, g1_three_body], axis=-1)
# linear to change shape
g1 = self.g1_shape_tranform(g1)
if self.add_tebd_to_repinit_out:
    assert self.tebd_transform is not None
    g1 = g1 + self.tebd_transform(g1_inp)
# mapping g1
if comm_dict is None:
    assert mapping is not None
    mapping_ext = (
        mapping.reshape([nframes, nall])
        .unsqueeze(-1)
        .expand([-1, -1, g1.shape[-1]])
    )
    g1_ext = decomp.take_along_axis(g1, mapping_ext, 1)
    g1 = g1_ext
# repformer
g1, g2, h2, rot_mat, sw = self.repformers(
    nlist_dict[
        get_multiple_nlist_key(
            self.repformers.get_rcut(), self.repformers.get_nsel()
        )
    ],
    extended_coord,
    extended_atype,
    g1,
    mapping,
    comm_dict,
)
if self.concat_output_tebd:
    g1 = paddle.concat([g1, g1_inp], axis=-1)
return g1, rot_mat, g2, h2, sw
|
🛠️ Refactor suggestion
Ensure proper handling of optional arguments in the forward
method.
In the forward
method, when comm_dict
is None
, there is an assert
statement to check that mapping
is not None
. Consider providing a clearer error message or handling the case where mapping
might be None
to improve robustness.
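A hedged sketch of such a guard (the message wording is illustrative, not from the codebase):

```python
if comm_dict is None and mapping is None:
    raise ValueError(
        "DescrptDPA2.forward: `mapping` is required when `comm_dict` "
        "is not provided (needed to map extended indices to local ones)."
    )
```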
descrpt_list = [self.repinit, self.repformers]
if self.use_three_body:
    descrpt_list.append(self.repinit_three_body)
for ii, descrpt in enumerate(descrpt_list):
Rename unused loop variable to underscore.
The loop control variable ii
is not used within the loop body. To indicate that it is intentionally unused, you can rename it to _
or _ii
.
Apply this diff to fix the issue:
-for ii, descrpt in enumerate(descrpt_list):
+for _, descrpt in enumerate(descrpt_list):
📝 Committable suggestion
for ii, descrpt in enumerate(descrpt_list):
for _, descrpt in enumerate(descrpt_list):
🧰 Tools
🪛 Ruff
509-509: Loop control variable ii
not used within loop body
Rename unused ii
to _ii
(B007)
Split <#4157> into several pull requests.
1. Add core modules of paddle backend (`deepmd.pd.*`) and related backend module unittests.
2. Support training/testing/freeze (C++ inference will be supported in a subsequent pull request) for example water/se_e2_a.
3. Add se_e2_a related unittests.

Related PR to be merged:
- [x] <PaddlePaddle/Paddle#69139>

## Accuracy test

### pytorch

![image](https://github.com/user-attachments/assets/cea8f313-4a57-4575-b55a-b6cf577654a2)

### paddle:

``` log
deepmd.utils.batch_size Adjust batch size from 1024 to 2048
deepmd.utils.batch_size Adjust batch size from 2048 to 4096
deepmd.entrypoints.test # number of test data : 30 ,
deepmd.entrypoints.test Energy MAE : 7.467160e-02 eV
deepmd.entrypoints.test Energy RMSE : 8.981154e-02 eV
deepmd.entrypoints.test Energy MAE/Natoms : 3.889146e-04 eV
deepmd.entrypoints.test Energy RMSE/Natoms : 4.677685e-04 eV
deepmd.entrypoints.test Force MAE : 4.495974e-02 eV/A
deepmd.entrypoints.test Force RMSE : 5.883696e-02 eV/A
deepmd.entrypoints.test Virial MAE : 4.683873e+00 eV
deepmd.entrypoints.test Virial RMSE : 6.298489e+00 eV
deepmd.entrypoints.test Virial MAE/Natoms : 2.439517e-02 eV
deepmd.entrypoints.test Virial RMSE/Natoms : 3.280463e-02 eV
```

## Summary by CodeRabbit

## Release Notes

- **New Features**
  - Introduced support for PaddlePaddle in the DeePMD framework, enhancing model training and evaluation capabilities.
  - Added new backend options and configuration files for multitask models.
  - Implemented new classes and methods for handling Paddle-specific functionalities, including descriptor calculations and model evaluations.
  - Enhanced the command-line interface to include Paddle as a backend option.
  - Expanded the functionality for managing Paddle dependencies and configurations in the testing framework.
- **Bug Fixes**
  - Improved error handling and robustness in various components across the framework.
- **Tests**
  - Expanded the test suite to include Paddle-specific tests, ensuring consistency and reliability across different backends.
  - Introduced unit tests for new functionalities related to Paddle, including model evaluations and descriptor calculations.
  - Added tests to validate force gradient calculations and smoothness properties in models.
  - Implemented tests for neighbor statistics and region transformations, ensuring accuracy in calculations.
- **Documentation**
  - Updated documentation across multiple modules to reflect new features and usage instructions.

Signed-off-by: HydrogenSulfate <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Add water/se_e2_a train/test/lammps
TODO:
- Keep `type_map` in the inference model.
- Support `__format__` for 0-D eager Tensor ([eager Tensor] Add `__format__` for eager Tensor, PaddlePaddle/Paddle#68677)
- Support `bmm_double_grad` in dygraph mode ([Prim] Support bmm_double_grad in eager mode, PaddlePaddle/Paddle#68834)
- Add unit tests under `tests/pd/`
- Change `Tensor.to` from an inplace operation to an out-of-place operation ([API] Change Tensor.to from in-place to a out-of-place operation, PaddlePaddle/Paddle#68992)
- Add gradient support for `paddle.remainder` ([OP] Add remainder_grad for remainder operator, PaddlePaddle/Paddle#68961)
- Refine `IndexPutCudaKernel` for thread safety and add index_put_double_grad (PaddlePaddle/Paddle#69095)

Support water/se_e2_a train (data parallel training)/test/freeze/lammps and se_e2_r train/test/freeze preliminarily.
Update related document with paddle content.
Summary by CodeRabbit
New Features
- New atomic model classes, including `BaseAtomicModel`, `DPAtomicModel`, `DipoleModel`, and more, providing a comprehensive structure for modeling atomic interactions.
- New descriptor classes, including `DescrptBlock`, `DescrptDPA1`, and `DescrptSeA`, to facilitate advanced atomic representation learning.
- A `DeepEval` class for evaluating deep learning models in molecular simulations, enhancing the framework's evaluation capabilities.
- New loss classes, including `PropertyLoss`, `EnergyStdLoss`, and `EnergySpinLoss`, to support various loss calculations in model training.
Bug Fixes
Documentation
Tests
Chores