Defect Description
When running tests/framework/Samplers/AdaptiveBatch/test_thick.xml, the output changed between Python 3.7 and Python 3.8. The code in question is in Samplers/LimitSurfaceSearch.py, and it finds different values for self.surfPoint[maxGridId][maxId,:]. The problem is that self.scores can have multiple entries equal to the maximum, so which one is found changes.
Possibly useful debugging prints:
diff --git a/ravenframework/Samplers/LimitSurfaceSearch.py b/ravenframework/Samplers/LimitSurfaceSearch.py
index 564cd061a..648d1730d 100644
--- a/ravenframework/Samplers/LimitSurfaceSearch.py
+++ b/ravenframework/Samplers/LimitSurfaceSearch.py
@@ -637,6 +637,8 @@ class LimitSurfaceSearch(AdaptiveSampler):
# variable
axisNames = [key.replace('<distribution>','') for key in self.axisName]
+ print("start", self.inputInfo)
+ print("surfPoint len", len(self.surfPoint), repr(self.batchStrategy))
if self.surfPoint is not None and len(self.surfPoint) > 0:
if self.batchStrategy == 'none':
self.__scoreCandidates()
@@ -644,8 +646,11 @@ class LimitSurfaceSearch(AdaptiveSampler):
for key, value in sorted(self.invPointPersistence.items()):
if key != self.exceptionGrid and self.surfPoint[key] is not None:
localMax = np.max(self.scores[key])
- if localMax > maxDistance:
+ if localMax >= maxDistance:
maxDistance, maxGridId, maxId = localMax, key, np.argmax(self.scores[key])
+ print("new max", maxDistance, maxGridId, maxId, self.surfPoint[maxGridId][maxId,:], self.scores[key])
+ print("maxes", maxDistance, maxGridId, maxId, self.surfPoint[maxGridId][maxId,:])
+ print("at maxes", self.inputInfo)
if maxDistance > 0.0:
for varIndex, _ in enumerate([key.replace('<distribution>','') for key in self.axisName]):
self.values[self.axisName[varIndex]] = copy.copy(float(self.surfPoint[maxGridId][maxId,varIndex]))
@@ -712,6 +717,7 @@ class LimitSurfaceSearch(AdaptiveSampler):
########################################################################
## Select one sample
selectedPoint = self.toProcess.pop()
+ print("selectedPoint", selectedPoint)
for varIndex, varName in enumerate(axisNames):
self.values[self.axisName[varIndex]] = float(selectedPoint[varIndex])
self.inputInfo['SampledVarsPb'][self.axisName[varIndex]] = self.distDict[self.axisName[varIndex]].pdf(self.values[self.axisName[varIndex]])
@@ -732,12 +738,14 @@ class LimitSurfaceSearch(AdaptiveSampler):
########################################################################
## Select one sample
selectedPoint = self.toProcess.pop()
+ print("selectedPoint", selectedPoint)
for varIndex, varName in enumerate(axisNames):
self.values[self.axisName[varIndex]] = float(selectedPoint[varIndex])
self.inputInfo['SampledVarsPb'][self.axisName[varIndex]] = self.distDict[self.axisName[varIndex]].pdf(self.values[self.axisName[varIndex]])
self.inputInfo['ProbabilityWeight-'+self.axisName[varIndex]] = self.distDict[self.axisName[varIndex]].pdf(self.values[self.axisName[varIndex]])
varSet=True
+ print("pre", self.inputInfo)
if not varSet:
#here we are still generating the batch
for key in sorted(self.distDict.keys()):
@@ -757,6 +765,7 @@ class LimitSurfaceSearch(AdaptiveSampler):
self.raiseADebug('At counter '+str(self.counter)+' the generated sampled variables are: '+str(self.values))
self.inputInfo['SamplerType'] = 'LimitSurfaceSearch'
self.inputInfo['subGridTol' ] = self.subGridTol
+ print(self.inputInfo)
# This is the normal derivation to be used later on
# pbMapPointCoord = np.zeros((len(self.surfPoint),self.nVar*2+1,self.nVar))
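The selection loop above explains the nondeterminism: np.argmax always returns the first maximal index within one array, but when two grids tie for the maximum score, the strict (`>`) versus non-strict (`>=`) comparison decides which grid's point wins. A minimal, self-contained sketch of this effect (the grid names, the scores data, and the `pick` helper are hypothetical, not RAVEN code):

```python
import numpy as np

# Hypothetical scores for two grids; both share the same maximal score (0.9),
# mimicking self.scores in LimitSurfaceSearch having tied maxima.
scores = {
    "gridA": np.array([0.1, 0.9, 0.5]),
    "gridB": np.array([0.9, 0.2, 0.3]),
}

def pick(scores, strict):
    """Return (gridId, index) of the chosen candidate.

    strict=True  mimics `if localMax > maxDistance:`  (first tied grid wins)
    strict=False mimics `if localMax >= maxDistance:` (last tied grid wins)
    """
    maxDistance, maxGridId, maxId = -np.inf, None, None
    for key, s in sorted(scores.items()):
        localMax = np.max(s)
        better = localMax > maxDistance if strict else localMax >= maxDistance
        if better:
            # np.argmax breaks ties within one array by taking the first index
            maxDistance, maxGridId, maxId = localMax, key, int(np.argmax(s))
    return maxGridId, maxId

print(pick(scores, strict=True))   # ('gridA', 1)
print(pick(scores, strict=False))  # ('gridB', 0)
```

Flipping the comparison flips the selected grid even though both candidates have an identical score, which is exactly the kind of change that can surface as different output between runs or Python versions.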
Steps to Reproduce
Run ./raven_framework tests/framework/Samplers/AdaptiveBatch/test_thick.xml
before and after #1933 is merged.
Expected Behavior
Ideally it would choose the same values every time.
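One way to get that reproducibility (an illustrative sketch only, not RAVEN's actual fix; `pick_deterministic` and the sample data are hypothetical) is to break score ties by an order that does not depend on dict iteration or Python version, e.g. the candidate point's coordinates:

```python
import numpy as np

def pick_deterministic(surfPoint, scores):
    """Among all candidates tied at the global maximum score, pick the one
    whose coordinates are lexicographically smallest. The result is the same
    regardless of iteration order, so runs are reproducible."""
    best = None  # (coords_tuple, gridId, index)
    globalMax = max(float(np.max(s)) for s in scores.values())
    for key in sorted(scores):
        s = scores[key]
        for idx in np.flatnonzero(s == globalMax):
            coords = tuple(float(c) for c in surfPoint[key][idx, :])
            cand = (coords, key, int(idx))
            if best is None or cand < best:
                best = cand
    return best[1], best[2]

# Hypothetical surface points and scores with a tie at 0.9:
surfPoint = {
    "gridA": np.array([[0.5, 0.5], [0.2, 0.8]]),
    "gridB": np.array([[0.1, 0.9], [0.7, 0.3]]),
}
scores = {
    "gridA": np.array([0.4, 0.9]),
    "gridB": np.array([0.9, 0.4]),
}
print(pick_deterministic(surfPoint, scores))  # ('gridB', 0)
```

The tie at 0.9 is resolved by comparing coordinates, so (0.1, 0.9) in gridB wins over (0.2, 0.8) in gridA deterministically.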
Screenshots and Input Files
No response
OS
Linux
OS Version
No response
Dependency Manager
CONDA
For Change Control Board: Issue Review
Is it tagged with a type: defect or task?
Is it tagged with a priority: critical, normal or minor?
If it will impact requirements or requirements tests, is it tagged with requirements?
If it is a defect, can it cause wrong results for users? If so, an email needs to be sent to the users.
Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
For Change Control Board: Issue Closure
If the issue is a defect, is the defect fixed?
If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
If the issue is a defect, does it impact the latest release branch? If yes, is there any issue tagged with release (create if needed)?
If the issue is being closed without a pull request, has an explanation of why it is being closed been provided?
Thank you for the RAVEN defect report that demonstrates the defect.